get:
Show a patch.

patch:
Partially update a patch (only the fields supplied in the request are changed).

put:
Update a patch (full replacement of the writable fields).
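
The same endpoint can be queried programmatically, as in the example request that follows. A minimal sketch using Python and the requests library (the library choice and the printed fields are illustrative, not part of the Patchwork output); read access needs no authentication:

    import requests

    # Fetch the patch detail as JSON from the public endpoint.
    resp = requests.get(
        "https://patches.dpdk.org/api/patches/91463/",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    patch = resp.json()

    # A few of the fields visible in the response body below.
    print(patch["name"])                # "[v1] net/virtio: fix vectorized Rx queue stuck"
    print(patch["state"])               # "accepted"
    print(patch["submitter"]["email"])  # "xuemingl@nvidia.com"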

GET /api/patches/91463/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 91463,
    "url": "https://patches.dpdk.org/api/patches/91463/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/20210414141404.9486-1-xuemingl@nvidia.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20210414141404.9486-1-xuemingl@nvidia.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20210414141404.9486-1-xuemingl@nvidia.com",
    "date": "2021-04-14T14:14:04",
    "name": "[v1] net/virtio: fix vectorized Rx queue stuck",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": true,
    "hash": "24f5cca695adfc1a86a3c03cc5902f7a6033ae24",
    "submitter": {
        "id": 1904,
        "url": "https://patches.dpdk.org/api/people/1904/?format=api",
        "name": "Xueming Li",
        "email": "xuemingl@nvidia.com"
    },
    "delegate": {
        "id": 2642,
        "url": "https://patches.dpdk.org/api/users/2642/?format=api",
        "username": "mcoquelin",
        "first_name": "Maxime",
        "last_name": "Coquelin",
        "email": "maxime.coquelin@redhat.com"
    },
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/20210414141404.9486-1-xuemingl@nvidia.com/mbox/",
    "series": [
        {
            "id": 16374,
            "url": "https://patches.dpdk.org/api/series/16374/?format=api",
            "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=16374",
            "date": "2021-04-14T14:14:04",
            "name": "[v1] net/virtio: fix vectorized Rx queue stuck",
            "version": 1,
            "mbox": "https://patches.dpdk.org/series/16374/mbox/"
        }
    ],
    "comments": "https://patches.dpdk.org/api/patches/91463/comments/",
    "check": "fail",
    "checks": "https://patches.dpdk.org/api/patches/91463/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 41D04A0562;\n\tWed, 14 Apr 2021 16:14:46 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id F291D161B47;\n\tWed, 14 Apr 2021 16:14:45 +0200 (CEST)",
            "from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])\n by mails.dpdk.org (Postfix) with ESMTP id E2B5C161B41\n for <dev@dpdk.org>; Wed, 14 Apr 2021 16:14:43 +0200 (CEST)",
            "from Internal Mail-Server by MTLPINE1 (envelope-from\n xuemingl@nvidia.com) with SMTP; 14 Apr 2021 17:14:41 +0300",
            "from nvidia.com ([172.27.8.56])\n by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 13EEEacw027244;\n Wed, 14 Apr 2021 17:14:37 +0300"
        ],
        "From": "Xueming Li <xuemingl@nvidia.com>",
        "To": "",
        "Cc": "\".Xueming Li\" <xuemingl@nvidia.com>, dev@dpdk.org, =?utf-8?b?6LCi5Y2O?=\n\t=?utf-8?b?5LyfICjmraTml7bmraTliLvvvIk=?= <huawei.xhw@alibaba-inc.com>,\n  jerin.jacob@caviumnetworks.com, drc@linux.vnet.ibm.com, stable@dpdk.org,\n Maxime Coquelin <maxime.coquelin@redhat.com>,\n Chenbo Xia <chenbo.xia@intel.com>, Jerin Jacob <jerinj@marvell.com>,\n Ruifeng Wang <ruifeng.wang@arm.com>,\n Bruce Richardson <bruce.richardson@intel.com>,\n Konstantin Ananyev <konstantin.ananyev@intel.com>,\n Jianfeng Tan <jianfeng.tan@intel.com>, Huawei Xie <huawei.xie@intel.com>,\n Jianbo Liu <jianbo.liu@linaro.org>, Yuanhan Liu <yuanhan.liu@linux.intel.com>",
        "Date": "Wed, 14 Apr 2021 22:14:04 +0800",
        "Message-Id": "<20210414141404.9486-1-xuemingl@nvidia.com>",
        "X-Mailer": "git-send-email 2.25.1",
        "In-Reply-To": "<20210414042631.7041-1-xuemingl@nvidia.com>",
        "References": "<20210414042631.7041-1-xuemingl@nvidia.com>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain; charset=UTF-8",
        "Content-Transfer-Encoding": "8bit",
        "Subject": "[dpdk-dev] [PATCH v1] net/virtio: fix vectorized Rx queue stuck",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: \".Xueming Li\" <xuemingl@nvidia.com>\n\nWhen Rx queue worked in vectorized mode and rxd <= 512, under traffic of\nhigh PPS rate, testpmd often start and receive packets of rxd without\nfurther growth.\n\nTestpmd started with rxq flush which tried to rx MAX_PKT_BURST(512)\npackets and drop. When Rx burst size >= Rx queue size, all descriptors\nin used queue consumed without rearm, device can't receive more packets.\nThe next Rx burst returned at once since no used descriptors found,\nrearm logic was skipped, rx vq kept in starving state.\n\nTo avoid rx vq starving, this patch always check the available queue,\nrearm if needed even no used descriptor reported by device.\n\nFixes: fc3d66212fed (\"virtio: add vector Rx\")\nCc: 谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>\nFixes: 2d7c37194ee4 (\"net/virtio: add NEON based Rx handler\")\nCc: jerin.jacob@caviumnetworks.com\nFixes: 52b5a707e6ca (\"net/virtio: add Altivec Rx\")\nCc: drc@linux.vnet.ibm.com\nCc: stable@dpdk.org\n\nSigned-off-by: Xueming Li <xuemingl@nvidia.com>\n---\n drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------\n drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------\n drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------\n 3 files changed, 18 insertions(+), 18 deletions(-)",
    "diff": "diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c b/drivers/net/virtio/virtio_rxtx_simple_altivec.c\nindex 62e5100a48..7534974ef4 100644\n--- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c\n+++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c\n@@ -85,6 +85,12 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,\n \tif (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))\n \t\treturn 0;\n \n+\tif (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {\n+\t\tvirtio_rxq_rearm_vec(rxvq);\n+\t\tif (unlikely(virtqueue_kick_prepare(vq)))\n+\t\t\tvirtqueue_notify(vq);\n+\t}\n+\n \tnb_used = virtqueue_nused(vq);\n \n \trte_compiler_barrier();\n@@ -102,12 +108,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,\n \n \trte_prefetch0(rused);\n \n-\tif (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {\n-\t\tvirtio_rxq_rearm_vec(rxvq);\n-\t\tif (unlikely(virtqueue_kick_prepare(vq)))\n-\t\t\tvirtqueue_notify(vq);\n-\t}\n-\n \tnb_total = nb_used;\n \tref_rx_pkts = rx_pkts;\n \tfor (nb_pkts_received = 0;\ndiff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c\nindex c8e4b13a02..7fd92d1b0c 100644\n--- a/drivers/net/virtio/virtio_rxtx_simple_neon.c\n+++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c\n@@ -84,6 +84,12 @@ virtio_recv_pkts_vec(void *rx_queue,\n \tif (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))\n \t\treturn 0;\n \n+\tif (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {\n+\t\tvirtio_rxq_rearm_vec(rxvq);\n+\t\tif (unlikely(virtqueue_kick_prepare(vq)))\n+\t\t\tvirtqueue_notify(vq);\n+\t}\n+\n \t/* virtqueue_nused has a load-acquire or rte_io_rmb inside */\n \tnb_used = virtqueue_nused(vq);\n \n@@ -100,12 +106,6 @@ virtio_recv_pkts_vec(void *rx_queue,\n \n \trte_prefetch_non_temporal(rused);\n \n-\tif (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {\n-\t\tvirtio_rxq_rearm_vec(rxvq);\n-\t\tif (unlikely(virtqueue_kick_prepare(vq)))\n-\t\t\tvirtqueue_notify(vq);\n-\t}\n-\n \tnb_total = nb_used;\n \tref_rx_pkts = rx_pkts;\n \tfor (nb_pkts_received = 0;\ndiff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c\nindex ff4eba33d6..7577f5e86d 100644\n--- a/drivers/net/virtio/virtio_rxtx_simple_sse.c\n+++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c\n@@ -85,6 +85,12 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,\n \tif (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))\n \t\treturn 0;\n \n+\tif (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {\n+\t\tvirtio_rxq_rearm_vec(rxvq);\n+\t\tif (unlikely(virtqueue_kick_prepare(vq)))\n+\t\t\tvirtqueue_notify(vq);\n+\t}\n+\n \tnb_used = virtqueue_nused(vq);\n \n \tif (unlikely(nb_used == 0))\n@@ -100,12 +106,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,\n \n \trte_prefetch0(rused);\n \n-\tif (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {\n-\t\tvirtio_rxq_rearm_vec(rxvq);\n-\t\tif (unlikely(virtqueue_kick_prepare(vq)))\n-\t\t\tvirtqueue_notify(vq);\n-\t}\n-\n \tnb_total = nb_used;\n \tref_rx_pkts = rx_pkts;\n \tfor (nb_pkts_received = 0;\n",
    "prefixes": [
        "v1"
    ]
}
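
The Allow header above also lists PUT and PATCH, which project maintainers can use to change fields such as state, delegate, or archived. A hedged sketch of a partial update, assuming token authentication of the form "Authorization: Token <token>" and a hypothetical PATCHWORK_TOKEN environment variable; the exact set of writable fields depends on the caller's permissions:

    import os
    import requests

    # PATCHWORK_TOKEN is a hypothetical environment variable holding an API token.
    token = os.environ["PATCHWORK_TOKEN"]

    # PATCH sends a partial update: only the fields being changed are included.
    resp = requests.patch(
        "https://patches.dpdk.org/api/patches/91463/",
        headers={"Authorization": f"Token {token}"},
        json={"state": "accepted", "archived": True},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["state"])  # expected: "accepted"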