get:
Show a patch.

patch:
Update a patch.

put:
Update a patch.
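A minimal sketch of fetching this patch over the endpoint above, assuming Python with the requests package; the URL and field names are taken from the sample response that follows:

import requests

# Retrieve patch 42019 as JSON (the same document reproduced below).
resp = requests.get("http://patches.dpdk.org/api/patches/42019/",
                    headers={"Accept": "application/json"})
resp.raise_for_status()
patch = resp.json()

print(patch["name"])                # "[v4,7/9] net/virtio: support in-order Rx and Tx"
print(patch["state"])               # "superseded"
print(patch["submitter"]["email"])  # "yong.liu@intel.com"
print(patch["mbox"])                # raw mbox URL, suitable for git am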

GET /api/patches/42019/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 42019,
    "url": "http://patches.dpdk.org/api/patches/42019/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20180630235049.62610-8-yong.liu@intel.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20180630235049.62610-8-yong.liu@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20180630235049.62610-8-yong.liu@intel.com",
    "date": "2018-06-30T23:50:47",
    "name": "[v4,7/9] net/virtio: support in-order Rx and Tx",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "e4312ead5af3cd6574afdae29948e874c9607b65",
    "submitter": {
        "id": 17,
        "url": "http://patches.dpdk.org/api/people/17/?format=api",
        "name": "Marvin Liu",
        "email": "yong.liu@intel.com"
    },
    "delegate": {
        "id": 2642,
        "url": "http://patches.dpdk.org/api/users/2642/?format=api",
        "username": "mcoquelin",
        "first_name": "Maxime",
        "last_name": "Coquelin",
        "email": "maxime.coquelin@redhat.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/20180630235049.62610-8-yong.liu@intel.com/mbox/",
    "series": [
        {
            "id": 336,
            "url": "http://patches.dpdk.org/api/series/336/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=336",
            "date": "2018-06-30T23:50:40",
            "name": "support in-order feature",
            "version": 4,
            "mbox": "http://patches.dpdk.org/series/336/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/42019/comments/",
    "check": "success",
    "checks": "http://patches.dpdk.org/api/patches/42019/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id 0A4CF1BECB;\n\tSat, 30 Jun 2018 18:05:23 +0200 (CEST)",
            "from mga14.intel.com (mga14.intel.com [192.55.52.115])\n\tby dpdk.org (Postfix) with ESMTP id 3DFCA1BEB5\n\tfor <dev@dpdk.org>; Sat, 30 Jun 2018 18:05:15 +0200 (CEST)",
            "from fmsmga004.fm.intel.com ([10.253.24.48])\n\tby fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t30 Jun 2018 09:05:14 -0700",
            "from dpdk-test32.sh.intel.com ([10.67.119.193])\n\tby fmsmga004.fm.intel.com with ESMTP; 30 Jun 2018 09:05:13 -0700"
        ],
        "X-Amp-Result": "SKIPPED(no attachment in message)",
        "X-Amp-File-Uploaded": "False",
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.51,291,1526367600\"; d=\"scan'208\";a=\"67643555\"",
        "From": "Marvin Liu <yong.liu@intel.com>",
        "To": "maxime.coquelin@redhat.com,\n\ttiwei.bie@intel.com",
        "Cc": "zhihong.wang@intel.com,\n\tdev@dpdk.org,\n\tMarvin Liu <yong.liu@intel.com>",
        "Date": "Sun,  1 Jul 2018 07:50:47 +0800",
        "Message-Id": "<20180630235049.62610-8-yong.liu@intel.com>",
        "X-Mailer": "git-send-email 2.17.0",
        "In-Reply-To": "<20180630235049.62610-1-yong.liu@intel.com>",
        "References": "<20180630235049.62610-1-yong.liu@intel.com>",
        "Subject": "[dpdk-dev] [PATCH v4 7/9] net/virtio: support in-order Rx and Tx",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "IN_ORDER Rx function depends on merge-able feature. Descriptors\nallocation and free will be done in bulk.\n\nVirtio dequeue logic:\n    dequeue_burst_rx(burst mbufs)\n    for (each mbuf b) {\n            if (b need merge) {\n                    merge remained mbufs\n                    add merged mbuf to return mbufs list\n            } else {\n                    add mbuf to return mbufs list\n            }\n    }\n    if (last mbuf c need merge) {\n            dequeue_burst_rx(required mbufs)\n            merge last mbuf c\n    }\n    refill_avail_ring_bulk()\n    update_avail_ring()\n    return mbufs list\n\nIN_ORDER Tx function can support offloading features. Packets which\nmatched \"can_push\" option will be handled by simple xmit function. Those\npackets can't match \"can_push\" will be handled by original xmit function\nwith in-order flag.\n\nVirtio enqueue logic:\n    xmit_cleanup(used descs)\n    for (each xmit mbuf b) {\n            if (b can inorder xmit) {\n                    add mbuf b to inorder burst list\n                    continue\n            } else {\n                    xmit inorder burst list\n                    xmit mbuf b by original function\n            }\n    }\n    if (inorder burst list not empty) {\n            xmit inorder burst list\n    }\n    update_avail_ring()\n\nSigned-off-by: Marvin Liu <yong.liu@intel.com>\nReviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>",
    "diff": "diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h\nindex bb40064ea..cd8070248 100644\n--- a/drivers/net/virtio/virtio_ethdev.h\n+++ b/drivers/net/virtio/virtio_ethdev.h\n@@ -83,9 +83,15 @@ uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\n uint16_t virtio_recv_mergeable_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\n \t\tuint16_t nb_pkts);\n \n+uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue,\n+\t\tstruct rte_mbuf **rx_pkts, uint16_t nb_pkts);\n+\n uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\tuint16_t nb_pkts);\n \n+uint16_t virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf **tx_pkts,\n+\t\tuint16_t nb_pkts);\n+\n uint16_t virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,\n \t\tuint16_t nb_pkts);\n \ndiff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c\nindex e9b1b496e..6394071b8 100644\n--- a/drivers/net/virtio/virtio_rxtx.c\n+++ b/drivers/net/virtio/virtio_rxtx.c\n@@ -122,6 +122,44 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,\n \treturn i;\n }\n \n+static uint16_t\n+virtqueue_dequeue_rx_inorder(struct virtqueue *vq,\n+\t\t\tstruct rte_mbuf **rx_pkts,\n+\t\t\tuint32_t *len,\n+\t\t\tuint16_t num)\n+{\n+\tstruct vring_used_elem *uep;\n+\tstruct rte_mbuf *cookie;\n+\tuint16_t used_idx = 0;\n+\tuint16_t i;\n+\n+\tif (unlikely(num == 0))\n+\t\treturn 0;\n+\n+\tfor (i = 0; i < num; i++) {\n+\t\tused_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);\n+\t\t/* Desc idx same as used idx */\n+\t\tuep = &vq->vq_ring.used->ring[used_idx];\n+\t\tlen[i] = uep->len;\n+\t\tcookie = (struct rte_mbuf *)vq->vq_descx[used_idx].cookie;\n+\n+\t\tif (unlikely(cookie == NULL)) {\n+\t\t\tPMD_DRV_LOG(ERR, \"vring descriptor with no mbuf cookie at %u\",\n+\t\t\t\tvq->vq_used_cons_idx);\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\trte_prefetch0(cookie);\n+\t\trte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *));\n+\t\trx_pkts[i]  = cookie;\n+\t\tvq->vq_used_cons_idx++;\n+\t\tvq->vq_descx[used_idx].cookie = NULL;\n+\t}\n+\n+\tvq_ring_free_inorder(vq, used_idx, i);\n+\treturn i;\n+}\n+\n #ifndef DEFAULT_TX_FREE_THRESH\n #define DEFAULT_TX_FREE_THRESH 32\n #endif\n@@ -150,6 +188,83 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)\n \t}\n }\n \n+/* Cleanup from completed inorder transmits. 
*/\n+static void\n+virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)\n+{\n+\tuint16_t i, used_idx, desc_idx, last_idx;\n+\tint16_t free_cnt = 0;\n+\tstruct vq_desc_extra *dxp = NULL;\n+\n+\tif (unlikely(num == 0))\n+\t\treturn;\n+\n+\tfor (i = 0; i < num; i++) {\n+\t\tstruct vring_used_elem *uep;\n+\n+\t\tused_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);\n+\t\tuep = &vq->vq_ring.used->ring[used_idx];\n+\t\tdesc_idx = (uint16_t)uep->id;\n+\n+\t\tdxp = &vq->vq_descx[desc_idx];\n+\t\tvq->vq_used_cons_idx++;\n+\n+\t\tif (dxp->cookie != NULL) {\n+\t\t\trte_pktmbuf_free(dxp->cookie);\n+\t\t\tdxp->cookie = NULL;\n+\t\t}\n+\t}\n+\n+\tlast_idx = desc_idx + dxp->ndescs - 1;\n+\tfree_cnt = last_idx - vq->vq_desc_tail_idx;\n+\tif (free_cnt <= 0)\n+\t\tfree_cnt += vq->vq_nentries;\n+\n+\tvq_ring_free_inorder(vq, last_idx, free_cnt);\n+}\n+\n+static inline int\n+virtqueue_enqueue_refill_inorder(struct virtqueue *vq,\n+\t\t\tstruct rte_mbuf **cookies,\n+\t\t\tuint16_t num)\n+{\n+\tstruct vq_desc_extra *dxp;\n+\tstruct virtio_hw *hw = vq->hw;\n+\tstruct vring_desc *start_dp;\n+\tuint16_t head_idx, idx, i = 0;\n+\n+\tif (unlikely(vq->vq_free_cnt == 0))\n+\t\treturn -ENOSPC;\n+\tif (unlikely(vq->vq_free_cnt < num))\n+\t\treturn -EMSGSIZE;\n+\n+\thead_idx = vq->vq_desc_head_idx & (vq->vq_nentries - 1);\n+\tstart_dp = vq->vq_ring.desc;\n+\n+\twhile (i < num) {\n+\t\tidx = head_idx & (vq->vq_nentries - 1);\n+\t\tdxp = &vq->vq_descx[idx];\n+\t\tdxp->cookie = (void *)cookies[i];\n+\t\tdxp->ndescs = 1;\n+\n+\t\tstart_dp[idx].addr =\n+\t\t\t\tVIRTIO_MBUF_ADDR(cookies[i], vq) +\n+\t\t\t\tRTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;\n+\t\tstart_dp[idx].len =\n+\t\t\t\tcookies[i]->buf_len -\n+\t\t\t\tRTE_PKTMBUF_HEADROOM +\n+\t\t\t\thw->vtnet_hdr_size;\n+\t\tstart_dp[idx].flags =  VRING_DESC_F_WRITE;\n+\n+\t\tvq_update_avail_ring(vq, idx);\n+\t\thead_idx++;\n+\t\ti++;\n+\t}\n+\n+\tvq->vq_desc_head_idx += num;\n+\tvq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);\n+\treturn 0;\n+}\n \n static inline int\n virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)\n@@ -295,9 +410,65 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr,\n \t}\n }\n \n+static inline void\n+virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,\n+\t\t\tstruct rte_mbuf **cookies,\n+\t\t\tuint16_t num)\n+{\n+\tstruct vq_desc_extra *dxp;\n+\tstruct virtqueue *vq = txvq->vq;\n+\tstruct vring_desc *start_dp;\n+\tstruct virtio_net_hdr *hdr;\n+\tuint16_t idx;\n+\tuint16_t head_size = vq->hw->vtnet_hdr_size;\n+\tint offload;\n+\tuint16_t i = 0;\n+\n+\tidx = vq->vq_desc_head_idx;\n+\tstart_dp = vq->vq_ring.desc;\n+\n+\toffload = tx_offload_enabled(vq->hw);\n+\n+\twhile (i < num) {\n+\t\tidx = idx & (vq->vq_nentries - 1);\n+\t\tdxp = &vq->vq_descx[idx];\n+\t\tdxp->cookie = (void *)cookies[i];\n+\t\tdxp->ndescs = 1;\n+\n+\t\thdr = (struct virtio_net_hdr *)\n+\t\t\trte_pktmbuf_prepend(cookies[i], head_size);\n+\t\tcookies[i]->pkt_len -= head_size;\n+\n+\t\t/* if offload disabled, it is not zeroed below, do it now */\n+\t\tif (offload == 0) {\n+\t\t\tASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);\n+\t\t\tASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);\n+\t\t\tASSIGN_UNLESS_EQUAL(hdr->flags, 0);\n+\t\t\tASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);\n+\t\t\tASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);\n+\t\t\tASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);\n+\t\t}\n+\n+\t\tvirtqueue_xmit_offload(hdr, cookies[i], offload);\n+\n+\t\tstart_dp[idx].addr  = VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq);\n+\t\tstart_dp[idx].len   = 
cookies[i]->data_len;\n+\t\tstart_dp[idx].flags = 0;\n+\n+\t\tvq_update_avail_ring(vq, idx);\n+\n+\t\tidx++;\n+\t\ti++;\n+\t};\n+\n+\tvq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);\n+\tvq->vq_desc_head_idx = idx & (vq->vq_nentries - 1);\n+}\n+\n static inline void\n virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,\n-\t\t       uint16_t needed, int use_indirect, int can_push)\n+\t\t\tuint16_t needed, int use_indirect, int can_push,\n+\t\t\tint in_order)\n {\n \tstruct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;\n \tstruct vq_desc_extra *dxp;\n@@ -310,6 +481,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,\n \tint offload;\n \n \toffload = tx_offload_enabled(vq->hw);\n+\n \thead_idx = vq->vq_desc_head_idx;\n \tidx = head_idx;\n \tdxp = &vq->vq_descx[idx];\n@@ -326,6 +498,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,\n \t\t * which is wrong. Below subtract restores correct pkt size.\n \t\t */\n \t\tcookie->pkt_len -= head_size;\n+\n \t\t/* if offload disabled, it is not zeroed below, do it now */\n \t\tif (offload == 0) {\n \t\t\tASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);\n@@ -376,11 +549,15 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,\n \tif (use_indirect)\n \t\tidx = vq->vq_ring.desc[head_idx].next;\n \n-\tvq->vq_desc_head_idx = idx;\n-\tif (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)\n-\t\tvq->vq_desc_tail_idx = idx;\n \tvq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);\n+\n+\tvq->vq_desc_head_idx = idx;\n \tvq_update_avail_ring(vq, head_idx);\n+\n+\tif (!in_order) {\n+\t\tif (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)\n+\t\t\tvq->vq_desc_tail_idx = idx;\n+\t}\n }\n \n void\n@@ -435,7 +612,7 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)\n \tstruct virtnet_rx *rxvq = &vq->rxq;\n \tstruct rte_mbuf *m;\n \tuint16_t desc_idx;\n-\tint error, nbufs;\n+\tint error, nbufs, i;\n \n \tPMD_INIT_FUNC_TRACE();\n \n@@ -465,6 +642,25 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)\n \t\t\tvirtio_rxq_rearm_vec(rxvq);\n \t\t\tnbufs += RTE_VIRTIO_VPMD_RX_REARM_THRESH;\n \t\t}\n+\t} else if (hw->use_inorder_rx) {\n+\t\tif ((!virtqueue_full(vq))) {\n+\t\t\tuint16_t free_cnt = vq->vq_free_cnt;\n+\t\t\tstruct rte_mbuf *pkts[free_cnt];\n+\n+\t\t\tif (!rte_pktmbuf_alloc_bulk(rxvq->mpool, pkts,\n+\t\t\t\tfree_cnt)) {\n+\t\t\t\terror = virtqueue_enqueue_refill_inorder(vq,\n+\t\t\t\t\t\tpkts,\n+\t\t\t\t\t\tfree_cnt);\n+\t\t\t\tif (unlikely(error)) {\n+\t\t\t\t\tfor (i = 0; i < free_cnt; i++)\n+\t\t\t\t\t\trte_pktmbuf_free(pkts[i]);\n+\t\t\t\t}\n+\t\t\t}\n+\n+\t\t\tnbufs += free_cnt;\n+\t\t\tvq_update_avail_idx(vq);\n+\t\t}\n \t} else {\n \t\twhile (!virtqueue_full(vq)) {\n \t\t\tm = rte_mbuf_raw_alloc(rxvq->mpool);\n@@ -574,6 +770,8 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,\n \t\tfor (desc_idx = mid_idx; desc_idx < vq->vq_nentries;\n \t\t     desc_idx++)\n \t\t\tvq->vq_ring.avail->ring[desc_idx] = desc_idx;\n+\t} else if (hw->use_inorder_tx) {\n+\t\tvq->vq_ring.desc[vq->vq_nentries - 1].next = 0;\n \t}\n \n \tVIRTQUEUE_DUMP(vq);\n@@ -590,6 +788,19 @@ virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)\n \t * successful since it was just dequeued.\n \t */\n \terror = virtqueue_enqueue_recv_refill(vq, m);\n+\n+\tif (unlikely(error)) {\n+\t\tRTE_LOG(ERR, PMD, \"cannot requeue discarded mbuf\");\n+\t\trte_pktmbuf_free(m);\n+\t}\n+}\n+\n+static 
void\n+virtio_discard_rxbuf_inorder(struct virtqueue *vq, struct rte_mbuf *m)\n+{\n+\tint error;\n+\n+\terror = virtqueue_enqueue_refill_inorder(vq, &m, 1);\n \tif (unlikely(error)) {\n \t\tRTE_LOG(ERR, PMD, \"cannot requeue discarded mbuf\");\n \t\trte_pktmbuf_free(m);\n@@ -826,6 +1037,194 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\n \treturn nb_rx;\n }\n \n+uint16_t\n+virtio_recv_mergeable_pkts_inorder(void *rx_queue,\n+\t\t\tstruct rte_mbuf **rx_pkts,\n+\t\t\tuint16_t nb_pkts)\n+{\n+\tstruct virtnet_rx *rxvq = rx_queue;\n+\tstruct virtqueue *vq = rxvq->vq;\n+\tstruct virtio_hw *hw = vq->hw;\n+\tstruct rte_mbuf *rxm;\n+\tstruct rte_mbuf *prev;\n+\tuint16_t nb_used, num, nb_rx;\n+\tuint32_t len[VIRTIO_MBUF_BURST_SZ];\n+\tstruct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];\n+\tint error;\n+\tuint32_t nb_enqueued;\n+\tuint32_t seg_num;\n+\tuint32_t seg_res;\n+\tuint32_t hdr_size;\n+\tint32_t i;\n+\tint offload;\n+\n+\tnb_rx = 0;\n+\tif (unlikely(hw->started == 0))\n+\t\treturn nb_rx;\n+\n+\tnb_used = VIRTQUEUE_NUSED(vq);\n+\tnb_used = RTE_MIN(nb_used, nb_pkts);\n+\tnb_used = RTE_MIN(nb_used, VIRTIO_MBUF_BURST_SZ);\n+\n+\tvirtio_rmb();\n+\n+\tPMD_RX_LOG(DEBUG, \"used:%d\", nb_used);\n+\n+\tnb_enqueued = 0;\n+\tseg_num = 1;\n+\tseg_res = 0;\n+\thdr_size = hw->vtnet_hdr_size;\n+\toffload = rx_offload_enabled(hw);\n+\n+\tnum = virtqueue_dequeue_rx_inorder(vq, rcv_pkts, len, nb_used);\n+\n+\tfor (i = 0; i < num; i++) {\n+\t\tstruct virtio_net_hdr_mrg_rxbuf *header;\n+\n+\t\tPMD_RX_LOG(DEBUG, \"dequeue:%d\", num);\n+\t\tPMD_RX_LOG(DEBUG, \"packet len:%d\", len[i]);\n+\n+\t\trxm = rcv_pkts[i];\n+\n+\t\tif (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {\n+\t\t\tPMD_RX_LOG(ERR, \"Packet drop\");\n+\t\t\tnb_enqueued++;\n+\t\t\tvirtio_discard_rxbuf_inorder(vq, rxm);\n+\t\t\trxvq->stats.errors++;\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\theader = (struct virtio_net_hdr_mrg_rxbuf *)\n+\t\t\t ((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM\n+\t\t\t - hdr_size);\n+\t\tseg_num = header->num_buffers;\n+\n+\t\tif (seg_num == 0)\n+\t\t\tseg_num = 1;\n+\n+\t\trxm->data_off = RTE_PKTMBUF_HEADROOM;\n+\t\trxm->nb_segs = seg_num;\n+\t\trxm->ol_flags = 0;\n+\t\trxm->vlan_tci = 0;\n+\t\trxm->pkt_len = (uint32_t)(len[i] - hdr_size);\n+\t\trxm->data_len = (uint16_t)(len[i] - hdr_size);\n+\n+\t\trxm->port = rxvq->port_id;\n+\n+\t\trx_pkts[nb_rx] = rxm;\n+\t\tprev = rxm;\n+\n+\t\tif (offload && virtio_rx_offload(rxm, &header->hdr) < 0) {\n+\t\t\tvirtio_discard_rxbuf_inorder(vq, rxm);\n+\t\t\trxvq->stats.errors++;\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\tif (hw->vlan_strip)\n+\t\t\trte_vlan_strip(rx_pkts[nb_rx]);\n+\n+\t\tseg_res = seg_num - 1;\n+\n+\t\t/* Merge remaining segments */\n+\t\twhile (seg_res != 0 && i < (num - 1)) {\n+\t\t\ti++;\n+\n+\t\t\trxm = rcv_pkts[i];\n+\t\t\trxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;\n+\t\t\trxm->pkt_len = (uint32_t)(len[i]);\n+\t\t\trxm->data_len = (uint16_t)(len[i]);\n+\n+\t\t\trx_pkts[nb_rx]->pkt_len += (uint32_t)(len[i]);\n+\t\t\trx_pkts[nb_rx]->data_len += (uint16_t)(len[i]);\n+\n+\t\t\tif (prev)\n+\t\t\t\tprev->next = rxm;\n+\n+\t\t\tprev = rxm;\n+\t\t\tseg_res -= 1;\n+\t\t}\n+\n+\t\tif (!seg_res) {\n+\t\t\tvirtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);\n+\t\t\tnb_rx++;\n+\t\t}\n+\t}\n+\n+\t/* Last packet still need merge segments */\n+\twhile (seg_res != 0) {\n+\t\tuint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res,\n+\t\t\t\t\tVIRTIO_MBUF_BURST_SZ);\n+\n+\t\tprev = rcv_pkts[nb_rx];\n+\t\tif (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {\n+\t\t\tnum = 
virtqueue_dequeue_rx_inorder(vq, rcv_pkts, len,\n+\t\t\t\t\t\t\t   rcv_cnt);\n+\t\t\tuint16_t extra_idx = 0;\n+\n+\t\t\trcv_cnt = num;\n+\t\t\twhile (extra_idx < rcv_cnt) {\n+\t\t\t\trxm = rcv_pkts[extra_idx];\n+\t\t\t\trxm->data_off =\n+\t\t\t\t\tRTE_PKTMBUF_HEADROOM - hdr_size;\n+\t\t\t\trxm->pkt_len = (uint32_t)(len[extra_idx]);\n+\t\t\t\trxm->data_len = (uint16_t)(len[extra_idx]);\n+\t\t\t\tprev->next = rxm;\n+\t\t\t\tprev = rxm;\n+\t\t\t\trx_pkts[nb_rx]->pkt_len += len[extra_idx];\n+\t\t\t\trx_pkts[nb_rx]->data_len += len[extra_idx];\n+\t\t\t\textra_idx += 1;\n+\t\t\t};\n+\t\t\tseg_res -= rcv_cnt;\n+\n+\t\t\tif (!seg_res) {\n+\t\t\t\tvirtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);\n+\t\t\t\tnb_rx++;\n+\t\t\t}\n+\t\t} else {\n+\t\t\tPMD_RX_LOG(ERR,\n+\t\t\t\t\t\"No enough segments for packet.\");\n+\t\t\tvirtio_discard_rxbuf_inorder(vq, prev);\n+\t\t\trxvq->stats.errors++;\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\trxvq->stats.packets += nb_rx;\n+\n+\t/* Allocate new mbuf for the used descriptor */\n+\n+\tif (likely(!virtqueue_full(vq))) {\n+\t\t/* free_cnt may include mrg descs */\n+\t\tuint16_t free_cnt = vq->vq_free_cnt;\n+\t\tstruct rte_mbuf *new_pkts[free_cnt];\n+\n+\t\tif (!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) {\n+\t\t\terror = virtqueue_enqueue_refill_inorder(vq, new_pkts,\n+\t\t\t\t\tfree_cnt);\n+\t\t\tif (unlikely(error)) {\n+\t\t\t\tfor (i = 0; i < free_cnt; i++)\n+\t\t\t\t\trte_pktmbuf_free(new_pkts[i]);\n+\t\t\t}\n+\t\t\tnb_enqueued += free_cnt;\n+\t\t} else {\n+\t\t\tstruct rte_eth_dev *dev =\n+\t\t\t\t&rte_eth_devices[rxvq->port_id];\n+\t\t\tdev->data->rx_mbuf_alloc_failed += free_cnt;\n+\t\t}\n+\t}\n+\n+\tif (likely(nb_enqueued)) {\n+\t\tvq_update_avail_idx(vq);\n+\n+\t\tif (unlikely(virtqueue_kick_prepare(vq))) {\n+\t\t\tvirtqueue_notify(vq);\n+\t\t\tPMD_RX_LOG(DEBUG, \"Notified\");\n+\t\t}\n+\t}\n+\n+\treturn nb_rx;\n+}\n+\n uint16_t\n virtio_recv_mergeable_pkts(void *rx_queue,\n \t\t\tstruct rte_mbuf **rx_pkts,\n@@ -1073,7 +1472,8 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\n \t\t}\n \n \t\t/* Enqueue Packet buffers */\n-\t\tvirtqueue_enqueue_xmit(txvq, txm, slots, use_indirect, can_push);\n+\t\tvirtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,\n+\t\t\tcan_push, 0);\n \n \t\ttxvq->stats.bytes += txm->pkt_len;\n \t\tvirtio_update_packet_stats(&txvq->stats, txm);\n@@ -1092,3 +1492,116 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\n \n \treturn nb_tx;\n }\n+\n+uint16_t\n+virtio_xmit_pkts_inorder(void *tx_queue,\n+\t\t\tstruct rte_mbuf **tx_pkts,\n+\t\t\tuint16_t nb_pkts)\n+{\n+\tstruct virtnet_tx *txvq = tx_queue;\n+\tstruct virtqueue *vq = txvq->vq;\n+\tstruct virtio_hw *hw = vq->hw;\n+\tuint16_t hdr_size = hw->vtnet_hdr_size;\n+\tuint16_t nb_used, nb_avail, nb_tx = 0, nb_inorder_pkts = 0;\n+\tstruct rte_mbuf *inorder_pkts[nb_pkts];\n+\tint error;\n+\n+\tif (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))\n+\t\treturn nb_tx;\n+\n+\tif (unlikely(nb_pkts < 1))\n+\t\treturn nb_pkts;\n+\n+\tVIRTQUEUE_DUMP(vq);\n+\tPMD_TX_LOG(DEBUG, \"%d packets to xmit\", nb_pkts);\n+\tnb_used = VIRTQUEUE_NUSED(vq);\n+\n+\tvirtio_rmb();\n+\tif (likely(nb_used > vq->vq_nentries - vq->vq_free_thresh))\n+\t\tvirtio_xmit_cleanup_inorder(vq, nb_used);\n+\n+\tif (unlikely(!vq->vq_free_cnt))\n+\t\tvirtio_xmit_cleanup_inorder(vq, nb_used);\n+\n+\tnb_avail = RTE_MIN(vq->vq_free_cnt, nb_pkts);\n+\n+\tfor (nb_tx = 0; nb_tx < nb_avail; nb_tx++) {\n+\t\tstruct rte_mbuf *txm = tx_pkts[nb_tx];\n+\t\tint 
slots, need;\n+\n+\t\t/* Do VLAN tag insertion */\n+\t\tif (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {\n+\t\t\terror = rte_vlan_insert(&txm);\n+\t\t\tif (unlikely(error)) {\n+\t\t\t\trte_pktmbuf_free(txm);\n+\t\t\t\tcontinue;\n+\t\t\t}\n+\t\t}\n+\n+\t\t/* optimize ring usage */\n+\t\tif ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||\n+\t\t     vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&\n+\t\t     rte_mbuf_refcnt_read(txm) == 1 &&\n+\t\t     RTE_MBUF_DIRECT(txm) &&\n+\t\t     txm->nb_segs == 1 &&\n+\t\t     rte_pktmbuf_headroom(txm) >= hdr_size &&\n+\t\t     rte_is_aligned(rte_pktmbuf_mtod(txm, char *),\n+\t\t\t\t__alignof__(struct virtio_net_hdr_mrg_rxbuf))) {\n+\t\t\tinorder_pkts[nb_inorder_pkts] = txm;\n+\t\t\tnb_inorder_pkts++;\n+\n+\t\t\ttxvq->stats.bytes += txm->pkt_len;\n+\t\t\tvirtio_update_packet_stats(&txvq->stats, txm);\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\tif (nb_inorder_pkts) {\n+\t\t\tvirtqueue_enqueue_xmit_inorder(txvq, inorder_pkts,\n+\t\t\t\t\t\t\tnb_inorder_pkts);\n+\t\t\tnb_inorder_pkts = 0;\n+\t\t}\n+\n+\t\tslots = txm->nb_segs + 1;\n+\t\tneed = slots - vq->vq_free_cnt;\n+\t\tif (unlikely(need > 0)) {\n+\t\t\tnb_used = VIRTQUEUE_NUSED(vq);\n+\t\t\tvirtio_rmb();\n+\t\t\tneed = RTE_MIN(need, (int)nb_used);\n+\n+\t\t\tvirtio_xmit_cleanup_inorder(vq, need);\n+\n+\t\t\tneed = slots - vq->vq_free_cnt;\n+\n+\t\t\tif (unlikely(need > 0)) {\n+\t\t\t\tPMD_TX_LOG(ERR,\n+\t\t\t\t\t\"No free tx descriptors to transmit\");\n+\t\t\t\tbreak;\n+\t\t\t}\n+\t\t}\n+\t\t/* Enqueue Packet buffers */\n+\t\tvirtqueue_enqueue_xmit(txvq, txm, slots, 0, 0, 1);\n+\n+\t\ttxvq->stats.bytes += txm->pkt_len;\n+\t\tvirtio_update_packet_stats(&txvq->stats, txm);\n+\t}\n+\n+\t/* Transmit all inorder packets */\n+\tif (nb_inorder_pkts)\n+\t\tvirtqueue_enqueue_xmit_inorder(txvq, inorder_pkts,\n+\t\t\t\t\t\tnb_inorder_pkts);\n+\n+\ttxvq->stats.packets += nb_tx;\n+\n+\tif (likely(nb_tx)) {\n+\t\tvq_update_avail_idx(vq);\n+\n+\t\tif (unlikely(virtqueue_kick_prepare(vq))) {\n+\t\t\tvirtqueue_notify(vq);\n+\t\t\tPMD_TX_LOG(DEBUG, \"Notified backend after xmit\");\n+\t\t}\n+\t}\n+\n+\tVIRTQUEUE_DUMP(vq);\n+\n+\treturn nb_tx;\n+}\n",
    "prefixes": [
        "v4",
        "7/9"
    ]
}
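
The patch and put methods listed at the top modify these same fields. A hedged sketch of a partial update with the same library, assuming token authentication and that fields such as "state" and "archived" are writable for accounts with maintainer rights; the token value is a placeholder:

import requests

API_TOKEN = "..."  # hypothetical maintainer API token for patches.dpdk.org

# PATCH performs a partial update: only the fields present in the body change.
resp = requests.patch(
    "http://patches.dpdk.org/api/patches/42019/",
    headers={"Authorization": "Token " + API_TOKEN,
             "Accept": "application/json"},
    json={"state": "superseded", "archived": True},
)
resp.raise_for_status()
print(resp.json()["state"])  # echoes the updated state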