get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
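As a minimal sketch of using these endpoints (assuming Python; the helper names, the example token placeholder, and the `requests` calls in the comments are illustrative, not part of Patchwork itself — write operations require an authenticated account with sufficient permissions):

```python
API_ROOT = "http://patches.dpdk.org/api"  # Patchwork REST API root, as seen in the URLs below


def patch_url(patch_id: int) -> str:
    """Build the detail URL for a patch, mirroring the 'url' field of the response."""
    return f"{API_ROOT}/patches/{patch_id}/"


def summarize(patch: dict) -> str:
    """Condense the fields a reviewer usually cares about from a patch object."""
    return f"{patch['name']} [{patch['state']}] by {patch['submitter']['name']}"


# Reading is unauthenticated:
#   patch = requests.get(patch_url(44415)).json()
#
# PUT/PATCH need an API token (DRF token auth), e.g. to change the state:
#   requests.patch(patch_url(44415), json={"state": "accepted"},
#                  headers={"Authorization": "Token <your-token>"})
```

With the JSON body shown below loaded into `patch`, `summarize(patch)` would yield the patch name, its state (`superseded`), and the submitter's name on one line.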

GET /api/patches/44415/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 44415,
    "url": "http://patches.dpdk.org/api/patches/44415/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/1536333719-32155-21-git-send-email-igor.russkikh@aquantia.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1536333719-32155-21-git-send-email-igor.russkikh@aquantia.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1536333719-32155-21-git-send-email-igor.russkikh@aquantia.com",
    "date": "2018-09-07T15:21:58",
    "name": "[20/21] net/atlantic: RX side structures and implementation",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "3b26a1f7659ee2e11b4ea10c8ad0837468328f1f",
    "submitter": {
        "id": 1124,
        "url": "http://patches.dpdk.org/api/people/1124/?format=api",
        "name": "Igor Russkikh",
        "email": "igor.russkikh@aquantia.com"
    },
    "delegate": {
        "id": 319,
        "url": "http://patches.dpdk.org/api/users/319/?format=api",
        "username": "fyigit",
        "first_name": "Ferruh",
        "last_name": "Yigit",
        "email": "ferruh.yigit@amd.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/1536333719-32155-21-git-send-email-igor.russkikh@aquantia.com/mbox/",
    "series": [
        {
            "id": 1228,
            "url": "http://patches.dpdk.org/api/series/1228/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=1228",
            "date": "2018-09-07T15:21:39",
            "name": "net/atlantic: Aquantia aQtion 10G NIC Family DPDK PMD driver",
            "version": 1,
            "mbox": "http://patches.dpdk.org/series/1228/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/44415/comments/",
    "check": "fail",
    "checks": "http://patches.dpdk.org/api/patches/44415/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id B60B37D52;\n\tFri,  7 Sep 2018 17:23:58 +0200 (CEST)",
            "from NAM03-BY2-obe.outbound.protection.outlook.com\n\t(mail-by2nam03on0076.outbound.protection.outlook.com [104.47.42.76])\n\tby dpdk.org (Postfix) with ESMTP id A294B5B40\n\tfor <dev@dpdk.org>; Fri,  7 Sep 2018 17:23:42 +0200 (CEST)",
            "from ubuntubox.rdc.aquantia.com (95.79.108.179) by\n\tBLUPR0701MB1652.namprd07.prod.outlook.com (2a01:111:e400:58c6::22)\n\twith Microsoft SMTP Server (version=TLS1_2,\n\tcipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.1122.16;\n\tFri, 7 Sep 2018 15:23:39 +0000"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n\td=AQUANTIA1COM.onmicrosoft.com; s=selector1-aquantia-com;\n\th=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n\tbh=OO8LVuByRYCqStnWta2dI1miCja3NFisYjRQd8D5KD4=;\n\tb=tWzh8J99Q3Cs94q5glLFnxqMHJikz9vMikfu+CaNp2RHZS32GdxPhPcrCsNLM9C2oyWyhqkyYleu7e65O6pJ3t9AcS8CQftX8jOFxOBxePiz7WUH8SAxhsqo0hzS/2GlgtAP+91qQDn8KKAzevpbw1z09XUG5Zh948PqPeBmT0c=",
        "Authentication-Results": "spf=none (sender IP is )\n\tsmtp.mailfrom=Igor.Russkikh@aquantia.com; ",
        "From": "Igor Russkikh <igor.russkikh@aquantia.com>",
        "To": "dev@dpdk.org",
        "Cc": "pavel.belous@aquantia.com, Nadezhda.Krupnina@aquantia.com,\n\tigor.russkikh@aquantia.com, Simon.Edelhaus@aquantia.com,\n\tCorey Melton <comelton@cisco.com>, Ashish Kumar <ashishk2@cisco.com>",
        "Date": "Fri,  7 Sep 2018 18:21:58 +0300",
        "Message-Id": "<1536333719-32155-21-git-send-email-igor.russkikh@aquantia.com>",
        "X-Mailer": "git-send-email 2.7.4",
        "In-Reply-To": "<1536333719-32155-1-git-send-email-igor.russkikh@aquantia.com>",
        "References": "<1536333719-32155-1-git-send-email-igor.russkikh@aquantia.com>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain",
        "X-Originating-IP": "[95.79.108.179]",
        "X-ClientProxiedBy": "BN6PR20CA0060.namprd20.prod.outlook.com\n\t(2603:10b6:404:151::22) To BLUPR0701MB1652.namprd07.prod.outlook.com\n\t(2a01:111:e400:58c6::22)",
        "X-MS-PublicTrafficType": "Email",
        "X-MS-Office365-Filtering-Correlation-Id": "96feb35b-35bd-4fcd-5c4d-08d614d5e400",
        "X-Microsoft-Antispam": "BCL:0; PCL:0;\n\tRULEID:(7020095)(4652040)(8989137)(4534165)(4627221)(201703031133081)(201702281549075)(8990107)(5600074)(711020)(2017052603328)(7153060)(7193020);\n\tSRVR:BLUPR0701MB1652; ",
        "X-Microsoft-Exchange-Diagnostics": [
            "1; BLUPR0701MB1652;\n\t3:Ypr9nDCrLRFVLpp//ka+IcOpFPQ7qyr57u5J6I1eK286SqeMTxAWtU22dZb9kthVQCOZCgz5l9dl8yu74QTpV8RZ9TywphoPSaRWMl8W39uYCDan1YJW0CgMOXniQ+7tyAvAM+sg/tsWtak8ccHBFXhwctVyEK99XEVEUoy5PZQzoGe1EB4KOGWs7COMhl/72l/RNkNuG/sGCgN0nooEyBm+5VjpQerPh0qByD4Vv4FzlQ3LCZl5h44MlRfYMu0J;\n\t25:2RvrePWq0aZNEydMevrQfsQgGUM6FTWRCUqzRV17Yf8+FaVwtR3SD4p1nLEhiuTfuhjpdaOwVqM8NdSRE7isFTId1vQgwh2/bZ3OfdBu4TkZy5UgKJjLW12wFhmZG7+6O/eepSPLR4yxJ8sdyqCTun3d+utXuaARO2JFrhyJB3uQQW3ytgjNp7EbWSMm1ywvTm4g54ahhMtsft68B9oUrCAP814TM+ss6ElrC5HdfCNPXGnULmICpO8V5qpDrN/ch70JGFAwYGrn5ObwOBpyNR2Gfq1lnUMFFo1XXJrIH9ozfU4wyBDW4+NaTAsHY6VawW7RC0FLZx+woLEZ7EES+g==;\n\t31:sJr36jE46c8TkfF5yEeTPYC9ExVrNspNqwHCoMkAJytmH2w9J1SdnLW82AlDrog89bXYk5qfVlSTgp18acMbo3qpMlKq6sq7R+R29DlO1TLwImvBEKxiBQqxIhni/80nwCJj6aHzecTakJFd3OOB2HHanUJdKFx9/ch+5u3L48TEvINgO5k6TFsOr/uE7oplZvi1IY0ti6suohfX3+dYHT4R1VmMYgsvxFs4QrgoBr0=",
            "1; BLUPR0701MB1652;\n\t20:5TZlYkZ51YROBaGVsOmWGqFHr+ZU6jgaL+s1lBw8WB8XUa2jhd58TlyBOIXdBhy1uKjz3MSCq/IR5eNIIGyr3mVfNOvdP6+etArBNuR7+UzWww/tLpOyUYHsnaOLJ7d03z8xl1ivdKUewBH4ly2HlJplDldEVuQCvEQDBaH963Qn0qzeoE0JbTAmP4VG2wtZizPfWV4FybzC4uvpR17KvxOoxX0lFxjEMQJxqkqFGFUqMxrLUAxSh4j11NjX/Fub0UrQW6lIY1nJUTiBuPcfTt3hWG4P7MGp+c9V0g37m0wTHy6baiQSOvqQoJ3p9cVY43uSdcRrLTVH3TXozlL7Uq67kgbyqZK9bwGHX/HAZ7MYht0vJuxYe/XiZBUC9/mys7ore2Uqox926XGoKUd4H5jnJhcKzYLWUahebUqq14CZ3woBP+FPhxZ4l/Nhtc0I0B/0his6KBKeW4xOttTLXO96LYeHClmCZfvEnszbeNSQX2bnPKzkMjbDnc8Q9fhX;\n\t4:JwFveqZ6F8quj0jbxaYWg2F6oMjThOx8zQ8Y+uS1NMRrbC2GtkpSnTNS4B1/QdKqgmuhsuK9X509bcC+QCdKxfkBM63rtKs9SIArADwjsmcro3aAsYXhW1cB8EwPIUyg4xKAYbLJtuNdtlWtt/4L7I4rw3qt0BptFBJFAt5C6zs8nVt5fB5WcyF+Lf6VYhsOJQsT4mUrgzVNf7mi8a4p7UhY3Ev+MV2uEiP3krdyX4Lsn58VxIGEsNgV4XDNvGuB8eTIkJnP9pSQhnBYYgkEew==",
            "=?us-ascii?Q?1; BLUPR0701MB1652;\n\t23:dup4f//7o4XCWmLLVwnx/uWJu0CIsac5FH4S8Ca?=\n\taH8VflPIqQxZx0Anu5r42yI/GA+aiM/rqXHGEp4VEImuWUZ872H2pExr1NfUerqNn7QrzqLQhyLV7pagGU82Q+zyPnrvbW8QhvsRYpUOWt3csPpVGBk9vYW+lMOwFb2YP6onjx7EnvMHtzRzXIxGysKphQ7XUniPOt8YZDR3+oEWjd6vt+KdlNZtmnDnOAP5IrBnSd4ubwrFPA9s3WOyJGqHOc9KGtNxF/0Axazv4iLy+u/yQ8jTicUqIiHyCENoY457A+KLOrB5nedVP1HHMoewFfB9gU1qSIP9pPiJ7iUnFPKLOPmbmI3iWKv+4u0C/ybcVW/aZlXUFSvilxdG6+4GpdPm/nuPaCb46oVz1CGBA/H4WEXvQvyOI+m5/GfBeR9AKpJXjLCm9tyce8oAslvcEnHWaOAC88YvmMjuX7Lb6Fdb8fOF6KwNLPhgWKBZR1yogHRRcMd8Cy8ddOIkPRNK7ZcaoS7OUVy6/5e3cf2Ulf5JgA3BysRy0da64WoFlhEMUsED3XsrYHxubyD13U6SQl9XckCotgFVWVyTwQiDszuS0pNCr+VybzEwDHVIuGPQ/KuiK1yRSwWFPdsIkQ6X25G9/VHA4eB09smz9ZkgVZh3xdK/39AZZ4CDVVu9ythNKT9vxUsP70SUy85lxHYPiZ/9SsN3xuPVS6BZkrN2gGIFwbSzA4GNtu75/TopTImFnt7t3kW7WYA/UqT9VrgV7rSlEqJOLFYAH6Ewagba5iM4Oxg8Rj99YUH+dlSnNM/XgJjW4GGcEyqQMc1eSlyKR/Xk6tfFdFAEIFLB1dTEaFIfedZhAgvhG5uDlpyYz9PAgFms1FQIsyqmiuyvz/E+/YapPQauY0b0qKkcodjkjB3xPhlUyzCeSK3112Xycgmmc8C7jYUUhTFyXq1HyGP0R/CQYJxo6hricEPuQU3dPNfovlXg+k3mFXCbhE34PgQcTp9vknxm8LDYA+zdOw294CmCioAs/Q141pNw3eDlWs6zo9FuBEQ3r/GAtD6qyd+P+P7t8XH7gSt1LA9JcKbD/FzESeSzLVJN/hGrhzG9YB/Pi890CcA7MpqtFRSGuVXsMkFSmOTxRMqTbcDcYbxXo8U75tGRc9oDGm7nYHKTZb3EhELC68Hd6/G1JYnKdmFiGyshWMg46S7fePxkQc1h5sapgDDCmqg/ocg09NJkL0EwydzjR5JRNdb2rLdpJigTSkZZezWJyK7IwKQwwDYaXtZRgKazdfQkkc5mOhAS4OQ==",
            "1; BLUPR0701MB1652;\n\t6:WH/aGVDYPuH1xx0258UQwYS19SQU4eOBr6lJclcHKUp5Bg4zt5Qx9dabH2ui46PCI6Or1qlwE0b76R04wbETAvfo3R1hyDVu+TbsHz0TSEaP03hc1NIDd4TXjYeKHDD6tq0LfqJoH8zjKGZx4BucEl00P/kJc61rikT47xSejSi/9uclTdAQ8wAzMfYVJM+VeVxVWg4epTW1m5VG4vESuIbB+AW7nUy0T05SG821cqbPAPChnYsQRoy5P9Bc5tGEM67AZhM2tORssz6Yhh3GZTHULRboXWbfnc5cW149lTrxBX1ww5nX7nUU3sOsAj4s2mF2u0LDVBjSvMt96eg3qZ3TR9FFNm/VloTeq/aZ7Xko9Bupw372vx55Az2r5i1zM+Nro8x0WDsEuBq7AFJe9wdW/G7nfwalnEk1IM3kKZ5UVWi943Px1GbNM8g9oExjcKwEqDX+hZq2rhpeVK9FKg==;\n\t5:MrsR1EGqo1zcq6iOg6APE2HCbvO6/6TOhdOyYQht75SuZNR5B1DRIqH++3XDkDJFlRsxUB0E2wSFnGLm5M/a8oa+r9l1f/EUUIHspGxmfKPBVCMbtzNynarfKfPNKIh4b0OFzDF9u4zb0l2Beow0WgA1UjMUqkiaDVj1AuCRSpo=;\n\t7:rIafuBqr90IkLZgmgN/4+Sk+nIEhK8UdguarvJx9c/wpQtRyViHAkJ2PZrCEdCCFaMvErTfPVthEFQTpOuZfsa5oZL06+ddYQEMjKIEDcZ3uNjzmhSozaMMLkmW9HV+HDHcYO/rR1Rk3EH/i9xuamQebkeATKX/nVU+IEk9R1EIPw89eIXA68pzF40DEWD8Mq26W03HUbKy4Ex6eiwtDhzFyxGe2DkwAOV+WYKrOVBu+BELLjE7On28/VpmkR45o"
        ],
        "X-MS-TrafficTypeDiagnostic": "BLUPR0701MB1652:",
        "X-Microsoft-Antispam-PRVS": "<BLUPR0701MB165295022909358DE970C4CB98000@BLUPR0701MB1652.namprd07.prod.outlook.com>",
        "X-Exchange-Antispam-Report-Test": "UriScan:;",
        "X-MS-Exchange-SenderADCheck": "1",
        "X-Exchange-Antispam-Report-CFA-Test": "BCL:0; PCL:0;\n\tRULEID:(6040522)(2401047)(5005006)(8121501046)(93006095)(93001095)(3231311)(944501410)(52105095)(10201501046)(3002001)(149027)(150027)(6041310)(201703131423095)(201702281528075)(20161123555045)(201703061421075)(201703061406153)(20161123558120)(20161123562045)(20161123564045)(20161123560045)(201708071742011)(7699050);\n\tSRVR:BLUPR0701MB1652; BCL:0; PCL:0; RULEID:; SRVR:BLUPR0701MB1652; ",
        "X-Forefront-PRVS": "07880C4932",
        "X-Forefront-Antispam-Report": "SFV:NSPM;\n\tSFS:(10009020)(39850400004)(396003)(376002)(366004)(346002)(136003)(189003)(199004)(81156014)(81166006)(44832011)(8936002)(2351001)(486006)(106356001)(2361001)(105586002)(446003)(2906002)(50226002)(476003)(11346002)(956004)(2616005)(305945005)(7736002)(16586007)(316002)(54906003)(14444005)(16526019)(66066001)(26005)(7696005)(52116002)(8676002)(51416003)(76176011)(186003)(386003)(68736007)(478600001)(6666003)(6916009)(6486002)(5660300001)(53936002)(47776003)(4326008)(25786009)(86362001)(48376002)(50466002)(575784001)(72206003)(6116002)(97736004)(36756003)(3846002);\n\tDIR:OUT; SFP:1101; SCL:1; SRVR:BLUPR0701MB1652;\n\tH:ubuntubox.rdc.aquantia.com; \n\tFPR:; SPF:None; LANG:en; PTR:InfoNoRecords; MX:1; A:1; ",
        "Received-SPF": "None (protection.outlook.com: aquantia.com does not designate\n\tpermitted sender hosts)",
        "X-Microsoft-Antispam-Message-Info": "e0DKk7N7xt/+7262IszTEdr8nw5l9e4OvawZO9ZVb25MfZwDYBzSHsziNUFnTxfBJvtBD9P9iFHYAtQGjpogwQip461LDO8eWiO12exHmTT6nnYMx0z30YxuMLKxhp8hJzEsNCEPC1r2KiUrR9y0wxofNfNX0RaWnHnqmU27t6OY2tpyJb9ie3UR9YqBhTTBmrZ3k8D/fvXy9242P9QD3SeT/+p052VYuYL9dsjSZBkryHHE9ChIohtlTA6uAyC0hZqMIn5zNAL+tfCzJd45vyH6YWDvK7GPYixFBZT2irhB02vUBFOIIk42RxMZKPARPU/y2C98X1UA8KS3CXYSWyJWWpUM3OA9/Q+bHr8NqBQ=",
        "SpamDiagnosticOutput": "1:99",
        "SpamDiagnosticMetadata": "NSPM",
        "X-OriginatorOrg": "aquantia.com",
        "X-MS-Exchange-CrossTenant-OriginalArrivalTime": "07 Sep 2018 15:23:39.1560\n\t(UTC)",
        "X-MS-Exchange-CrossTenant-Network-Message-Id": "96feb35b-35bd-4fcd-5c4d-08d614d5e400",
        "X-MS-Exchange-CrossTenant-FromEntityHeader": "Hosted",
        "X-MS-Exchange-CrossTenant-Id": "83e2e134-991c-4ede-8ced-34d47e38e6b1",
        "X-MS-Exchange-Transport-CrossTenantHeadersStamped": "BLUPR0701MB1652",
        "Subject": "[dpdk-dev] [PATCH 20/21] net/atlantic: RX side structures and\n\timplementation",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>\n---\n drivers/net/atlantic/Makefile     |   6 +\n drivers/net/atlantic/atl_ethdev.c |  44 +++\n drivers/net/atlantic/atl_ethdev.h |  80 ++++\n drivers/net/atlantic/atl_rxtx.c   | 758 ++++++++++++++++++++++++++++++++++++++\n drivers/net/atlantic/meson.build  |   6 +\n 5 files changed, 894 insertions(+)\n create mode 100644 drivers/net/atlantic/atl_rxtx.c",
    "diff": "diff --git a/drivers/net/atlantic/Makefile b/drivers/net/atlantic/Makefile\nindex 4b0e6f06c..7b11246d1 100644\n--- a/drivers/net/atlantic/Makefile\n+++ b/drivers/net/atlantic/Makefile\n@@ -54,7 +54,13 @@ VPATH += $(SRCDIR)/hw_atl\n #\n # all source are stored in SRCS-y\n #\n+SRCS-$(CONFIG_RTE_LIBRTE_ATLANTIC_PMD) += atl_rxtx.c\n SRCS-$(CONFIG_RTE_LIBRTE_ATLANTIC_PMD) += atl_ethdev.c\n+SRCS-$(CONFIG_RTE_LIBRTE_ATLANTIC_PMD) += atl_hw_regs.c\n+SRCS-$(CONFIG_RTE_LIBRTE_ATLANTIC_PMD) += hw_atl_utils.c\n+SRCS-$(CONFIG_RTE_LIBRTE_ATLANTIC_PMD) += hw_atl_llh.c\n+SRCS-$(CONFIG_RTE_LIBRTE_ATLANTIC_PMD) += hw_atl_utils_fw2x.c\n+SRCS-$(CONFIG_RTE_LIBRTE_ATLANTIC_PMD) += hw_atl_b0.c\n SRCS-$(CONFIG_RTE_LIBRTE_ATLANTIC_PMD) += rte_pmd_atlantic.c\n \n # install this header file\ndiff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c\nindex 44cea12ab..bb4d96bb1 100644\n--- a/drivers/net/atlantic/atl_ethdev.c\n+++ b/drivers/net/atlantic/atl_ethdev.c\n@@ -277,6 +277,8 @@ static const struct eth_dev_ops atl_eth_dev_ops = {\n \t.xstats_get_names     = atl_dev_xstats_get_names,\n \t.stats_reset\t      = atl_dev_stats_reset,\n \t.xstats_reset\t      = atl_dev_xstats_reset,\n+\t.queue_stats_mapping_set = atl_dev_queue_stats_mapping_set,\n+\n \t.fw_version_get       = atl_fw_version_get,\n \t.dev_infos_get\t      = atl_dev_info_get,\n \t.dev_supported_ptypes_get = atl_dev_supported_ptypes_get,\n@@ -289,6 +291,20 @@ static const struct eth_dev_ops atl_eth_dev_ops = {\n \t.vlan_tpid_set        = atl_vlan_tpid_set,\n \t.vlan_strip_queue_set = atl_vlan_strip_queue_set,\n \n+\t/* Queue Control */\n+\t.rx_queue_start\t      = atl_rx_queue_start,\n+\t.rx_queue_stop\t      = atl_rx_queue_stop,\n+\t.rx_queue_setup       = atl_rx_queue_setup,\n+\t.rx_queue_release     = atl_rx_queue_release,\n+\n+\t.rx_queue_intr_enable = atl_dev_rx_queue_intr_enable,\n+\t.rx_queue_intr_disable = atl_dev_rx_queue_intr_disable,\n+\n+\t.rx_queue_count       = 
atl_rx_queue_count,\n+\t.rx_descriptor_done   = atl_dev_rx_descriptor_done,\n+\t.rx_descriptor_status = atl_dev_rx_descriptor_status,\n+\t.tx_descriptor_status = atl_dev_tx_descriptor_status,\n+\n \t/* LEDs */\n \t.dev_led_on           = atl_dev_led_on,\n \t.dev_led_off          = atl_dev_led_off,\n@@ -307,6 +323,7 @@ static const struct eth_dev_ops atl_eth_dev_ops = {\n \t.mac_addr_remove      = atl_remove_mac_addr,\n \t.mac_addr_set\t      = atl_set_default_mac_addr,\n \t.set_mc_addr_list     = atl_dev_set_mc_addr_list,\n+\t.rxq_info_get\t      = atl_rxq_info_get,\n \t.reta_update          = atl_reta_update,\n \t.reta_query           = atl_reta_query,\n \t.rss_hash_update      = atl_rss_hash_update,\n@@ -622,6 +639,19 @@ atl_dev_start(struct rte_eth_dev *dev)\n \t\t}\n \t}\n \n+\t/* This can fail when allocating mbufs for descriptor rings */\n+\terr = atl_rx_init(dev);\n+\tif (err) {\n+\t\tPMD_INIT_LOG(ERR, \"Unable to initialize RX hardware\");\n+\t\tgoto error;\n+\t}\n+\n+\terr = atl_start_queues(dev);\n+\tif (err < 0) {\n+\t\tPMD_INIT_LOG(ERR, \"Unable to start rxtx queues\");\n+\t\tgoto error;\n+\t}\n+\n \terr = hw->aq_fw_ops->update_link_status(hw);\n \n \tif (err) {\n@@ -708,6 +738,9 @@ atl_dev_stop(struct rte_eth_dev *dev)\n \t/* reset the NIC */\n \tatl_reset_hw(hw);\n \thw->adapter_stopped = 0;\n+\n+\tatl_stop_queues(dev);\n+\n \t/* Clear stored conf */\n \tdev->data->scattered_rx = 0;\n \tdev->data->lro = 0;\n@@ -766,6 +799,8 @@ atl_dev_close(struct rte_eth_dev *dev)\n \n \tatl_dev_stop(dev);\n \thw->adapter_stopped = 1;\n+\n+\tatl_free_queues(dev);\n }\n \n static int\n@@ -865,6 +900,15 @@ atl_dev_xstats_reset(struct rte_eth_dev *dev __rte_unused)\n \treturn;\n }\n \n+static int\n+atl_dev_queue_stats_mapping_set(struct rte_eth_dev *eth_dev __rte_unused,\n+\t\t\t\t\t     uint16_t queue_id __rte_unused,\n+\t\t\t\t\t     uint8_t stat_idx __rte_unused,\n+\t\t\t\t\t     uint8_t is_rx __rte_unused)\n+{\n+\t/* The mapping is hardcoded: queue 0 -> stat 0, etc 
*/\n+\treturn 0;\n+}\n \n static int\n atl_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)\ndiff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h\nindex 3ebef1f43..e532c43fa 100644\n--- a/drivers/net/atlantic/atl_ethdev.h\n+++ b/drivers/net/atlantic/atl_ethdev.h\n@@ -13,6 +13,17 @@\n #include \"atl_types.h\"\n #include \"hw_atl/hw_atl_utils.h\"\n \n+#define ATL_RSS_OFFLOAD_ALL ( \\\n+\tETH_RSS_IPV4 | \\\n+\tETH_RSS_NONFRAG_IPV4_TCP | \\\n+\tETH_RSS_NONFRAG_IPV4_UDP | \\\n+\tETH_RSS_IPV6 | \\\n+\tETH_RSS_NONFRAG_IPV6_TCP | \\\n+\tETH_RSS_NONFRAG_IPV6_UDP | \\\n+\tETH_RSS_IPV6_EX | \\\n+\tETH_RSS_IPV6_TCP_EX | \\\n+\tETH_RSS_IPV6_UDP_EX)\n+\n #define ATL_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)\n #define ATL_FLAG_NEED_LINK_CONFIG (uint32_t)(4 << 0)\n \n@@ -46,6 +57,75 @@ struct atl_adapter {\n #define ATL_DEV_PRIVATE_TO_CFG(adapter) \\\n \t(&((struct atl_adapter *)adapter)->hw_cfg)\n \n+extern const struct rte_flow_ops atl_flow_ops;\n+\n+/*\n+ * RX/TX function prototypes\n+ */\n+void atl_rx_queue_release(void *rxq);\n+void atl_tx_queue_release(void *txq);\n+\n+int atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,\n+\t\tuint16_t nb_rx_desc, unsigned int socket_id,\n+\t\tconst struct rte_eth_rxconf *rx_conf,\n+\t\tstruct rte_mempool *mb_pool);\n+\n+int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,\n+\t\tuint16_t nb_tx_desc, unsigned int socket_id,\n+\t\tconst struct rte_eth_txconf *tx_conf);\n+\n+uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);\n+\n+int atl_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);\n+int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);\n+int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);\n+\n+int atl_dev_rx_queue_intr_enable(struct rte_eth_dev *eth_dev, uint16_t queue_id);\n+int atl_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev, uint16_t queue_id);\n+\n+int atl_rx_init(struct 
rte_eth_dev *dev);\n+int atl_tx_init(struct rte_eth_dev *dev);\n+\n+int atl_start_queues(struct rte_eth_dev *dev);\n+int atl_stop_queues(struct rte_eth_dev *dev);\n+void atl_free_queues(struct rte_eth_dev *dev);\n+\n+int atl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);\n+int atl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);\n+\n+int atl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);\n+int atl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);\n+\n+void atl_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,\n+\tstruct rte_eth_rxq_info *qinfo);\n+\n+void atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,\n+\tstruct rte_eth_txq_info *qinfo);\n+\n+uint16_t atl_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\n+\t\tuint16_t nb_pkts);\n+\n+uint16_t atl_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,\n+\t\t\t\t    uint16_t nb_pkts);\n+\n+uint16_t atl_recv_pkts_lro_single_alloc(void *rx_queue,\n+\t\tstruct rte_mbuf **rx_pkts, uint16_t nb_pkts);\n+uint16_t atl_recv_pkts_lro_bulk_alloc(void *rx_queue,\n+\t\tstruct rte_mbuf **rx_pkts, uint16_t nb_pkts);\n+\n+uint16_t atl_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\n+\t\tuint16_t nb_pkts);\n+\n+uint16_t atl_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,\n+\t\tuint16_t nb_pkts);\n+\n+uint16_t atl_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\n+\t\tuint16_t nb_pkts);\n+\n int\n atl_dev_led_control(struct rte_eth_dev *dev, int control);\n+\n+bool\n+is_atl_supported(struct rte_eth_dev *dev);\n+\n #endif /* _ATLANTIC_ETHDEV_H_ */\ndiff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c\nnew file mode 100644\nindex 000000000..6198f5dfe\n--- /dev/null\n+++ b/drivers/net/atlantic/atl_rxtx.c\n@@ -0,0 +1,758 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright(c) 2018 Aquantia Corporation\n+ */\n+\n+#include <sys/queue.h>\n+\n+#include <stdio.h>\n+#include <stdlib.h>\n+#include 
<string.h>\n+#include <errno.h>\n+#include <stdint.h>\n+#include <stdarg.h>\n+#include <inttypes.h>\n+\n+#include <rte_interrupts.h>\n+#include <rte_byteorder.h>\n+#include <rte_common.h>\n+#include <rte_log.h>\n+#include <rte_debug.h>\n+#include <rte_pci.h>\n+#include <rte_memory.h>\n+#include <rte_memcpy.h>\n+#include <rte_memzone.h>\n+#include <rte_launch.h>\n+#include <rte_eal.h>\n+#include <rte_per_lcore.h>\n+#include <rte_lcore.h>\n+#include <rte_atomic.h>\n+#include <rte_branch_prediction.h>\n+#include <rte_mempool.h>\n+#include <rte_malloc.h>\n+#include <rte_mbuf.h>\n+#include <rte_ether.h>\n+#include <rte_ethdev_driver.h>\n+#include <rte_prefetch.h>\n+#include <rte_udp.h>\n+#include <rte_tcp.h>\n+#include <rte_sctp.h>\n+#include <rte_net.h>\n+#include <rte_string_fns.h>\n+#include \"atl_ethdev.h\"\n+#include \"atl_hw_regs.h\"\n+\n+#include \"atl_logs.h\"\n+#include \"hw_atl/hw_atl_llh.h\"\n+#include \"hw_atl/hw_atl_b0.h\"\n+#include \"hw_atl/hw_atl_b0_internal.h\"\n+\n+/**\n+ * Structure associated with each descriptor of the RX ring of a RX queue.\n+ */\n+struct atl_rx_entry {\n+\tstruct rte_mbuf *mbuf;\n+};\n+\n+/**\n+ * Structure associated with each RX queue.\n+ */\n+struct atl_rx_queue {\n+\tstruct rte_mempool\t*mb_pool;\n+\tstruct hw_atl_rxd_s\t*hw_ring;\n+\tuint64_t\t\thw_ring_phys_addr;\n+\tstruct atl_rx_entry\t*sw_ring;\n+\tuint16_t\t\tnb_rx_desc;\n+\tuint16_t\t\trx_tail;\n+\tuint16_t\t\tnb_rx_hold;\n+\tuint16_t\t\trx_free_thresh;\n+\tuint16_t\t\tqueue_id;\n+\tuint16_t\t\tport_id;\n+\tuint16_t\t\tbuff_size;\n+\tbool\t\t\tl3_csum_enabled;\n+\tbool\t\t\tl4_csum_enabled;\n+};\n+\n+inline static void\n+atl_reset_rx_queue(struct atl_rx_queue *rxq)\n+{\n+\tstruct hw_atl_rxd_s * rxd = NULL;\n+\tint i;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tfor (i = 0; i < rxq->nb_rx_desc; i++) {\n+\t\trxd = (struct hw_atl_rxd_s *)&rxq->hw_ring[i];\n+\t\trxd->buf_addr = 0;\n+\t\trxd->hdr_addr = 0;\n+\t}\n+\n+\trxq->rx_tail = 0;\n+}\n+\n+int\n+atl_rx_queue_setup(struct 
rte_eth_dev *dev, uint16_t rx_queue_id,\n+\t\t   uint16_t nb_rx_desc, unsigned int socket_id,\n+\t\t   const struct rte_eth_rxconf *rx_conf,\n+\t\t   struct rte_mempool *mb_pool)\n+{\n+\tstruct atl_rx_queue *rxq;\n+\tconst struct rte_memzone *mz;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\t/* make sure a valid number of descriptors have been requested */\n+\tif (nb_rx_desc < AQ_HW_MIN_RX_RING_SIZE || nb_rx_desc > AQ_HW_MAX_RX_RING_SIZE) {\n+\t\tPMD_INIT_LOG(ERR, \"Number of Rx descriptors must be \"\n+\t\t\"less than or equal to %d, \"\n+\t\t\"greater than or equal to %d\", AQ_HW_MAX_RX_RING_SIZE, AQ_HW_MIN_RX_RING_SIZE);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/*\n+\t* if this queue existed already, free the associated memory. The\n+\t* queue cannot be reused in case we need to allocate memory on\n+\t* different socket than was previously used.\n+\t*/\n+\tif (dev->data->rx_queues[rx_queue_id] != NULL) {\n+\t\tatl_rx_queue_release(dev->data->rx_queues[rx_queue_id]);\n+\t\tdev->data->rx_queues[rx_queue_id] = NULL;\n+\t}\n+\n+\t/* allocate memory for the queue structure */\n+\trxq = rte_zmalloc_socket(\"atlantic Rx queue\", sizeof(*rxq), RTE_CACHE_LINE_SIZE, socket_id);\n+\tif (rxq == NULL) {\n+\t\tPMD_INIT_LOG(ERR, \"Cannot allocate queue structure\");\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\t/* setup queue */\n+\trxq->mb_pool = mb_pool;\n+\trxq->nb_rx_desc = nb_rx_desc;\n+\trxq->port_id = dev->data->port_id;\n+\trxq->queue_id = rx_queue_id;\n+\trxq->rx_free_thresh = rx_conf->rx_free_thresh;\n+\n+\trxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;\n+\trxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &\n+\t\t(DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);\n+\n+\t/* allocate memory for the software ring */\n+\trxq->sw_ring = rte_zmalloc_socket(\"atlantic sw rx ring\",\n+\t\t\t\t\t\tnb_rx_desc * sizeof(struct atl_rx_entry),\n+\t\t\t\t\t\tRTE_CACHE_LINE_SIZE, socket_id);\n+\tif (rxq->sw_ring == NULL) {\n+\t\tPMD_INIT_LOG(ERR, 
\"Cannot allocate software ring\");\n+\t\trte_free(rxq);\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\t/*\n+\t * allocate memory for the hardware descriptor ring. A memzone large\n+\t * enough to hold the maximum ring size is requested to allow for\n+\t * resizing in later calls to the queue setup function.\n+\t */\n+\tmz = rte_eth_dma_zone_reserve(dev, \"rx hw_ring\", rx_queue_id,\n+\t\t\t\t      HW_ATL_B0_MAX_RXD * sizeof(struct hw_atl_rxd_s),\n+\t\t\t\t      128, socket_id);\n+\tif (mz == NULL) {\n+\t\tPMD_INIT_LOG(ERR, \"Cannot allocate hardware ring\");\n+\t\trte_free(rxq->sw_ring);\n+\t\trte_free(rxq);\n+\t\treturn -ENOMEM;\n+\t}\n+\trxq->hw_ring = mz->addr;\n+\trxq->hw_ring_phys_addr = mz->iova;\n+\n+\tatl_reset_rx_queue(rxq);\n+\n+\tdev->data->rx_queues[rx_queue_id] = rxq;\n+\treturn 0;\n+}\n+\n+int\n+atl_rx_init(struct rte_eth_dev *eth_dev)\n+{\n+\tstruct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);\n+\tstruct aq_rss_parameters *rss_params = &hw->aq_nic_cfg->aq_rss;\n+\tstruct atl_rx_queue *rxq;\n+\tuint64_t base_addr = 0;\n+\tint i = 0;\n+\tint err = 0;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tfor (i =0 ; i < eth_dev->data->nb_rx_queues; i++) {\n+\t\trxq = eth_dev->data->rx_queues[i];\n+\t\tbase_addr = rxq->hw_ring_phys_addr;\n+\n+\t\t/* Take requested pool mbuf size and adapt descriptor buffer to best fit */\n+\t\tint buff_size = rte_pktmbuf_data_room_size(rxq->mb_pool) -\n+\t\t\t\tRTE_PKTMBUF_HEADROOM;\n+\n+\t\tbuff_size = RTE_ALIGN_FLOOR(buff_size, 1024);\n+\t\tif (buff_size > HW_ATL_B0_RXD_BUF_SIZE_MAX) {\n+\t\t\tPMD_INIT_LOG(WARNING, \"queue %d: mem pool buff size is larger than max supported\\n\", rxq->queue_id);\n+\t\t\tbuff_size = HW_ATL_B0_RXD_BUF_SIZE_MAX;\n+\t\t}\n+\t\tif (buff_size < 1024) {\n+\t\t\tPMD_INIT_LOG(ERR, \"queue %d: mem pool buff size is too small\\n\", rxq->queue_id);\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t\trxq->buff_size = buff_size;\n+\n+\t\terr = hw_atl_b0_hw_ring_rx_init(hw, base_addr, rxq->queue_id, rxq->nb_rx_desc, 
buff_size, 0, rxq->port_id);\n+\t}\n+\n+\tfor (i = rss_params->indirection_table_size; i--;)\n+\t\trss_params->indirection_table[i] = i & (eth_dev->data->nb_rx_queues - 1);\n+\thw_atl_b0_hw_rss_set(hw, rss_params);\n+\treturn err;\n+}\n+\n+static int\n+atl_alloc_rx_queue_mbufs(struct atl_rx_queue *rxq)\n+{\n+\tstruct atl_rx_entry *rx_entry = rxq->sw_ring;\n+\tstruct hw_atl_rxd_s *rxd;\n+\tuint64_t dma_addr = 0;\n+\tuint32_t i = 0;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\t/* fill Rx ring */\n+\tfor (i = 0; i < rxq->nb_rx_desc; i++) {\n+\t\tstruct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);\n+\n+\t\tif (mbuf == NULL) {\n+\t\t\tPMD_INIT_LOG(ERR, \"mbuf alloca failed for rx queue %u\", (unsigned)rxq->queue_id);\n+\t\t\treturn -ENOMEM;\n+\t\t}\n+\n+\t\tmbuf->data_off = RTE_PKTMBUF_HEADROOM;\n+\t\tmbuf->port = rxq->port_id;\n+\n+\t\tdma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));\n+\t\trxd = (struct hw_atl_rxd_s *)&rxq->hw_ring[i];\n+\t\trxd->buf_addr = dma_addr;\n+\t\trxd->hdr_addr = 0;\n+\t\trx_entry[i].mbuf = mbuf;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static void\n+atl_rx_queue_release_mbufs(struct atl_rx_queue *rxq)\n+{\n+\tint i;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tif (rxq->sw_ring != NULL) {\n+\t\tfor (i = 0; i < rxq->nb_rx_desc; i++) {\n+\t\t\tif (rxq->sw_ring[i].mbuf != NULL) {\n+\t\t\t\trte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);\n+\t\t\t\trxq->sw_ring[i].mbuf = NULL;\n+\t\t\t}\n+\t\t}\n+\t}\n+}\n+\n+int\n+atl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)\n+{\n+\tstruct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);\n+\tstruct atl_rx_queue *rxq = NULL;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tif (rx_queue_id < dev->data->nb_rx_queues) {\n+\t\trxq = dev->data->rx_queues[rx_queue_id];\n+\n+\t\tif (atl_alloc_rx_queue_mbufs(rxq) != 0) {\n+\t\t\tPMD_INIT_LOG(ERR, \"Allocate mbufs for queue %d failed\", rx_queue_id);\n+\t\t\treturn -1;\n+\t\t}\n+\n+\t\thw_atl_b0_hw_ring_rx_start(hw, 
rx_queue_id);\n+\n+\t\trte_wmb();\n+\t\thw_atl_reg_rx_dma_desc_tail_ptr_set(hw, rxq->nb_rx_desc -1, rx_queue_id);\n+\t\tdev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;\n+\t} else {\n+\t\treturn -1;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+int\n+atl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)\n+{\n+\tstruct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);\n+\tstruct atl_rx_queue *rxq = NULL;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tif (rx_queue_id < dev->data->nb_rx_queues) {\n+\t\trxq = dev->data->rx_queues[rx_queue_id];\n+\n+\t\thw_atl_b0_hw_ring_rx_stop(hw, rx_queue_id);\n+\n+\t\tatl_rx_queue_release_mbufs(rxq);\n+\t\tatl_reset_rx_queue(rxq);\n+\n+\t\tdev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;\n+\t} else {\n+\t\treturn -1;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+void\n+atl_rx_queue_release(void *rx_queue)\n+{\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tif (rx_queue != NULL) {\n+\t\tstruct atl_rx_queue *rxq = (struct atl_rx_queue *)rx_queue;\n+\n+\t\tatl_rx_queue_release_mbufs(rxq);\n+\t\trte_free(rxq->sw_ring);\n+\t\trte_free(rxq);\n+\t}\n+}\n+\n+void\n+atl_free_queues(struct rte_eth_dev *dev)\n+{\n+\tunsigned i;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tfor (i = 0; i < dev->data->nb_rx_queues; i++) {\n+\t\tatl_rx_queue_release(dev->data->rx_queues[i]);\n+\t\tdev->data->rx_queues[i] = 0;\n+\t}\n+\tdev->data->nb_rx_queues = 0;\n+\n+}\n+\n+int\n+atl_start_queues(struct rte_eth_dev *dev)\n+{\n+\tint i;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tfor (i = 0; i < dev->data->nb_rx_queues; i++) {\n+\t\tif (atl_rx_queue_start(dev, i) != 0) {\n+\t\t\tPMD_DRV_LOG(ERR, \"Start Rx queue %d failed\", i);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+}\n+\n+int\n+atl_stop_queues(struct rte_eth_dev *dev)\n+{\n+\tint i;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tfor (i = 0; i < dev->data->nb_rx_queues; i++) {\n+\t\tif (atl_rx_queue_stop(dev, i) != 0) {\n+\t\t\tPMD_DRV_LOG(ERR, \"Stop Rx queue %d failed\", i);\n+\t\t\treturn 
-1;\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+}\n+\n+void\n+atl_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo)\n+{\n+\tstruct atl_rx_queue *rxq;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\trxq = dev->data->rx_queues[queue_id];\n+\n+\tqinfo->mp = rxq->mb_pool;\n+\tqinfo->scattered_rx = dev->data->scattered_rx;\n+\tqinfo->nb_desc = rxq->nb_rx_desc;\n+}\n+\n+/* Return Rx queue avail count */\n+\n+uint32_t\n+atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)\n+{\n+\tstruct atl_rx_queue *rxq;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tif (rx_queue_id >= dev->data->nb_rx_queues) {\n+\t\tPMD_DRV_LOG(ERR, \"Invalid RX queue id=%d\", rx_queue_id);\n+\t\treturn 0;\n+\t}\n+\n+\trxq = dev->data->rx_queues[rx_queue_id];\n+\n+\tif (rxq == NULL)\n+\t\treturn 0;\n+\n+\treturn rxq->nb_rx_desc - rxq->nb_rx_hold;\n+}\n+\n+int\n+atl_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)\n+{\n+\tstruct atl_rx_queue *rxq = rx_queue;\n+\tstruct hw_atl_rxd_wb_s *rxd;\n+\tuint32_t idx;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tif (unlikely(offset >= rxq->nb_rx_desc))\n+\t\treturn 0;\n+\n+\tidx = rxq->rx_tail + offset;\n+\n+\tif (idx >= rxq->nb_rx_desc)\n+\t\tidx -= rxq->nb_rx_desc;\n+\n+\trxd = (struct hw_atl_rxd_wb_s *)&rxq->hw_ring[idx];\n+\n+\treturn rxd->dd;\n+}\n+\n+int\n+atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)\n+{\n+\tstruct atl_rx_queue *rxq = rx_queue;\n+\tstruct hw_atl_rxd_wb_s *rxd;\n+\tuint32_t idx;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tif (unlikely(offset >= rxq->nb_rx_desc))\n+\t\treturn -EINVAL;\n+\n+\tif (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)\n+\t\treturn RTE_ETH_RX_DESC_UNAVAIL;\n+\n+\tidx = rxq->rx_tail + offset;\n+\n+\tif (idx >= rxq->nb_rx_desc)\n+\t\tidx -= rxq->nb_rx_desc;\n+\n+\trxd = (struct hw_atl_rxd_wb_s *)&rxq->hw_ring[idx];\n+\n+\tif (rxd->dd)\n+\t\treturn RTE_ETH_RX_DESC_DONE;\n+\n+\treturn RTE_ETH_RX_DESC_AVAIL;\n+}\n+\n+static int\n+atl_rx_enable_intr(struct rte_eth_dev *dev, uint16_t queue_id, bool 
enable)\n+{\n+\tstruct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);\n+\tstruct atl_rx_queue *rxq;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tif (queue_id >= dev->data->nb_rx_queues) {\n+\t\tPMD_DRV_LOG(ERR, \"Invalid RX queue id=%d\", queue_id);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\trxq = dev->data->rx_queues[queue_id];\n+\n+\tif (rxq == NULL)\n+\t\treturn 0;\n+\n+\t/* Mapping interrupt vector */\n+\thw_atl_itr_irq_map_en_rx_set(hw, enable, queue_id);\n+\n+\treturn 0;\n+}\n+\n+int\n+atl_dev_rx_queue_intr_enable(struct rte_eth_dev *eth_dev, uint16_t queue_id)\n+{\n+\treturn atl_rx_enable_intr(eth_dev, queue_id, true);\n+}\n+\n+int\n+atl_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev, uint16_t queue_id)\n+{\n+\treturn atl_rx_enable_intr(eth_dev, queue_id, false);\n+}\n+\n+static uint64_t\n+atl_desc_to_offload_flags(struct atl_rx_queue *rxq, struct hw_atl_rxd_wb_s *rxd_wb)\n+{\n+\tuint64_t mbuf_flags = 0;\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tif (rxq->l3_csum_enabled && ((rxd_wb->pkt_type & 0x3) == 0)) { /* IPv4 */\n+\t\tif (rxd_wb->rx_stat & BIT(1)) { /* IPv4 csum error */\n+\t\t\tmbuf_flags |= PKT_RX_IP_CKSUM_BAD;\n+\t\t} else {\n+\t\t\tmbuf_flags |= PKT_RX_IP_CKSUM_GOOD;\n+\t\t}\n+\t} else {\n+\t\tmbuf_flags |= PKT_RX_IP_CKSUM_UNKNOWN;\n+\t}\n+\n+\tif (rxq->l4_csum_enabled && (rxd_wb->rx_stat & BIT(3))) { /* L4 csum calculated */\n+\t\tif (rxd_wb->rx_stat & BIT(2)) {\n+\t\t\tmbuf_flags |= PKT_RX_L4_CKSUM_BAD;\n+\t\t} else {\n+\t\t\tmbuf_flags |= PKT_RX_L4_CKSUM_GOOD;\n+\t\t}\n+\t} else {\n+\t\tmbuf_flags |= PKT_RX_L4_CKSUM_UNKNOWN;\n+\t}\n+\n+\treturn mbuf_flags;\n+}\n+\n+static uint32_t\n+atl_desc_to_pkt_type(struct hw_atl_rxd_wb_s *rxd_wb)\n+{\n+\tuint32_t type = RTE_PTYPE_UNKNOWN;\n+\tuint16_t l2_l3_type = rxd_wb->pkt_type & 0x3;\n+\tuint16_t l4_type = (rxd_wb->pkt_type & 0x1C) >> 2;\n+\n+\tswitch (l2_l3_type) {\n+\tcase 0:\n+\t\ttype = RTE_PTYPE_L3_IPV4;\n+\t\tbreak;\n+\tcase 1:\n+\t\ttype = RTE_PTYPE_L3_IPV6;\n+\t\tbreak;\n+\tcase 2:\n+\t\ttype = 
RTE_PTYPE_L2_ETHER;\n+\t\tbreak;\n+\tcase 3:\n+\t\ttype = RTE_PTYPE_L2_ETHER_ARP;\n+\t\tbreak;\n+\t}\n+\n+\tswitch (l4_type) {\n+\tcase 0:\n+\t\ttype |= RTE_PTYPE_L4_TCP;\n+\t\tbreak;\n+\tcase 1:\n+\t\ttype |= RTE_PTYPE_L4_UDP;\n+\t\tbreak;\n+\tcase 2:\n+\t\ttype |= RTE_PTYPE_L4_SCTP;\n+\t\tbreak;\n+\tcase 3:\n+\t\ttype |= RTE_PTYPE_L4_ICMP;\n+\t\tbreak;\n+\t}\n+\n+\tif (rxd_wb->pkt_type & BIT(5))\n+\t\ttype |= RTE_PTYPE_L2_ETHER_VLAN;\n+\n+\treturn type;\n+}\n+\n+uint16_t\n+atl_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\n+{\n+\tstruct atl_rx_queue *rxq = (struct atl_rx_queue *)rx_queue;\n+\tstruct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];\n+\tstruct atl_adapter *adapter = ATL_DEV_TO_ADAPTER(&rte_eth_devices[rxq->port_id]);\n+\tstruct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(adapter);\n+\tstruct atl_rx_entry *sw_ring = rxq->sw_ring;\n+\n+\tstruct rte_mbuf *new_mbuf;\n+\tstruct rte_mbuf *rx_mbuf, *rx_mbuf_prev, *rx_mbuf_first;\n+\tstruct atl_rx_entry *rx_entry;\n+\tuint16_t nb_rx = 0;\n+\tuint16_t nb_hold = 0;\n+\tstruct hw_atl_rxd_wb_s rxd_wb;\n+\tstruct hw_atl_rxd_s *rxd = NULL;\n+\tuint16_t tail = rxq->rx_tail;\n+\tuint64_t dma_addr;\n+\tuint16_t pkt_len = 0;\n+\n+\twhile (nb_rx < nb_pkts) {\n+\t\tuint16_t eop_tail = tail;\n+\n+\t\trxd = (struct hw_atl_rxd_s *)&rxq->hw_ring[tail];\n+\t\trxd_wb = *(struct hw_atl_rxd_wb_s *)rxd;\n+\n+\t\tif (!rxd_wb.dd) { /* RxD is not done */\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tPMD_RX_LOG(ERR, \"port_id=%u queue_id=%u tail=%u \"\n+\t\t\t   \"eop=0x%x pkt_len=%u hash=0x%x hash_type=0x%x\",\n+\t\t\t   (unsigned)rxq->port_id, (unsigned)rxq->queue_id,\n+\t\t\t   (unsigned)tail, (unsigned)rxd_wb.eop,\n+\t\t\t   (unsigned)rte_le_to_cpu_16(rxd_wb.pkt_len),\n+\t\t\t   rxd_wb.rss_hash, rxd_wb.rss_type);\n+\n+\t\t/* No EOP yet: look ahead for the end of the multi-segment packet */\n+\t\tif (!rxd_wb.eop) {\n+\t\t\twhile (true) {\n+\t\t\t\tstruct hw_atl_rxd_wb_s *eop_rxwbd;\n+\n+\t\t\t\teop_tail = (eop_tail + 1) % rxq->nb_rx_desc;\n+\t\t\t\teop_rxwbd = (struct 
hw_atl_rxd_wb_s *)&rxq->hw_ring[eop_tail];\n+\t\t\t\tif (!eop_rxwbd->dd) {\n+\t\t\t\t\t/* no EOP received yet */\n+\t\t\t\t\teop_tail = tail;\n+\t\t\t\t\tbreak;\n+\t\t\t\t}\n+\t\t\t\tif (eop_rxwbd->dd && eop_rxwbd->eop)\n+\t\t\t\t\tbreak;\n+\t\t\t}\n+\t\t\t/* No EOP in ring */\n+\t\t\tif (eop_tail == tail)\n+\t\t\t\tbreak;\n+\t\t}\n+\t\trx_mbuf_prev = rx_mbuf_first = NULL;\n+\n+\t\t/* Run through packet segments */\n+\t\twhile (true) {\n+\n+\t\t\tnew_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);\n+\t\t\tif (new_mbuf == NULL) {\n+\t\t\t\tPMD_RX_LOG(ERR, \"RX mbuf alloc failed port_id=%u \"\n+\t\t\t\t\t\t   \"queue_id=%u\", (unsigned)rxq->port_id,\n+\t\t\t\t\t\t   (unsigned)rxq->queue_id);\n+\t\t\t\tdev->data->rx_mbuf_alloc_failed++;\n+\t\t\t\tadapter->sw_stats.rx_nombuf++;\n+\t\t\t\tgoto err_stop;\n+\t\t\t}\n+\n+\t\t\tnb_hold++;\n+\t\t\trx_entry = &sw_ring[tail];\n+\n+\t\t\trx_mbuf = rx_entry->mbuf;\n+\t\t\trx_entry->mbuf = new_mbuf;\n+\t\t\tdma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf));\n+\n+\t\t\t/* setup RX descriptor */\n+\t\t\trxd->hdr_addr = 0;\n+\t\t\trxd->buf_addr = dma_addr;\n+\n+\t\t\t/*\n+\t\t\t * Initialize the returned mbuf.\n+\t\t\t * 1) setup generic mbuf fields:\n+\t\t\t *\t  - number of segments,\n+\t\t\t *\t  - next segment,\n+\t\t\t *\t  - packet length,\n+\t\t\t *\t  - RX port identifier.\n+\t\t\t * 2) integrate hardware offload data, if any:\n+\t\t\t *\t  - RSS flag & hash,\n+\t\t\t *\t  - IP checksum flag,\n+\t\t\t *\t  - VLAN TCI, if any,\n+\t\t\t *\t  - error flags.\n+\t\t\t */\n+\t\t\tpkt_len = (uint16_t)rte_le_to_cpu_16(rxd_wb.pkt_len);\n+\t\t\trx_mbuf->data_off = RTE_PKTMBUF_HEADROOM;\n+\t\t\trte_prefetch1((char *)rx_mbuf->buf_addr + rx_mbuf->data_off);\n+\t\t\trx_mbuf->nb_segs = 0;\n+\t\t\trx_mbuf->next = NULL;\n+\t\t\trx_mbuf->pkt_len = pkt_len;\n+\t\t\trx_mbuf->data_len = pkt_len;\n+\t\t\tif (rxd_wb.eop) {\n+\t\t\t\tu16 remainder_len = pkt_len % rxq->buff_size;\n+\t\t\t\tif (!remainder_len)\n+\t\t\t\t\tremainder_len = 
rxq->buff_size;\n+\t\t\t\trx_mbuf->data_len = remainder_len;\n+\t\t\t} else {\n+\t\t\t\trx_mbuf->data_len = pkt_len > rxq->buff_size ?\n+\t\t\t\t\t\t\t\t\t\trxq->buff_size : pkt_len;\n+\t\t\t}\n+\t\t\trx_mbuf->port = rxq->port_id;\n+\n+\t\t\trx_mbuf->hash.rss = rxd_wb.rss_hash;\n+\n+\t\t\trx_mbuf->vlan_tci = rxd_wb.vlan;\n+\n+\t\t\trx_mbuf->ol_flags = atl_desc_to_offload_flags(rxq, &rxd_wb);\n+\t\t\trx_mbuf->packet_type = atl_desc_to_pkt_type(&rxd_wb);\n+\n+\t\t\tif (!rx_mbuf_first)\n+\t\t\t\trx_mbuf_first = rx_mbuf;\n+\t\t\trx_mbuf_first->nb_segs++;\n+\n+\t\t\tif (rx_mbuf_prev)\n+\t\t\t\trx_mbuf_prev->next = rx_mbuf;\n+\t\t\trx_mbuf_prev = rx_mbuf;\n+\n+\t\t\ttail = (tail + 1) % rxq->nb_rx_desc;\n+\t\t\t/* Prefetch next mbufs */\n+\t\t\trte_prefetch0(sw_ring[tail].mbuf);\n+\t\t\tif ((tail & 0x3) == 0)\n+\t\t\t\trte_prefetch0(&sw_ring[tail]);\n+\n+\t\t\t/* EOP reached: mbuf chain for this packet is complete */\n+\t\t\tif (rxd_wb.eop) {\n+\t\t\t\tbreak;\n+\t\t\t}\n+\t\t\trxd = (struct hw_atl_rxd_s *)&rxq->hw_ring[tail];\n+\t\t\trxd_wb = *(struct hw_atl_rxd_wb_s *)rxd;\n+\t\t}\n+\n+\t\t/*\n+\t\t * Store the mbuf address into the next entry of the array\n+\t\t * of returned packets.\n+\t\t */\n+\t\trx_pkts[nb_rx++] = rx_mbuf_first;\n+\t\tadapter->sw_stats.q_ipackets[rxq->queue_id]++;\n+\t\tadapter->sw_stats.q_ibytes[rxq->queue_id] += rx_mbuf_first->pkt_len;\n+\n+\t\tPMD_RX_LOG(ERR, \"add mbuf segs=%d pkt_len=%d\", rx_mbuf_first->nb_segs, rx_mbuf_first->pkt_len);\n+\t}\n+\n+err_stop:\n+\n+\trxq->rx_tail = tail;\n+\n+\t/*\n+\t * If the number of free RX descriptors is greater than the RX free\n+\t * threshold of the queue, advance the Receive Descriptor Tail (RDT)\n+\t * register.\n+\t * Update the RDT with the value of the last processed RX descriptor\n+\t * minus 1, to guarantee that the RDT register is never equal to 
the\n+\t * RDH register, which creates a \"full\" ring situation from the\n+\t * hardware point of view...\n+\t */\n+\tnb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);\n+\tif (nb_hold > rxq->rx_free_thresh) {\n+\t\tPMD_RX_LOG(ERR, \"port_id=%u queue_id=%u rx_tail=%u \"\n+\t\t\t\"nb_hold=%u nb_rx=%u\",\n+\t\t\t(unsigned)rxq->port_id, (unsigned)rxq->queue_id,\n+\t\t\t(unsigned)tail, (unsigned)nb_hold, (unsigned)nb_rx);\n+\t\ttail = (uint16_t)((tail == 0) ? (rxq->nb_rx_desc - 1) : (tail - 1));\n+\n+\t\thw_atl_reg_rx_dma_desc_tail_ptr_set(hw, tail, rxq->queue_id);\n+\n+\t\tnb_hold = 0;\n+\t}\n+\n+\trxq->nb_rx_hold = nb_hold;\n+\n+\treturn nb_rx;\n+}\ndiff --git a/drivers/net/atlantic/meson.build b/drivers/net/atlantic/meson.build\nindex 19fa41cd3..42821f35a 100644\n--- a/drivers/net/atlantic/meson.build\n+++ b/drivers/net/atlantic/meson.build\n@@ -4,8 +4,14 @@\n #subdir('hw_atl')\n \n sources = files(\n+\t'atl_rxtx.c',\n \t'atl_ethdev.c',\n+\t'atl_hw_regs.c',\n \t'rte_pmd_atlantic.c',\n+\t'hw_atl/hw_atl_b0.c',\n+\t'hw_atl/hw_atl_llh.c',\n+\t'hw_atl/hw_atl_utils_fw2x.c',\n+\t'hw_atl/hw_atl_utils.c',\n )\n \n deps += ['hash', 'eal']\n",
    "prefixes": [
        "20/21"
    ]
}