Patch Detail
get: Show a patch.
patch: Update a patch.
put: Update a patch.
GET /api/patches/69214/?format=api
Message ID: <20200424092445.44693-7-yong.liu@intel.com>
State: Superseded, archived
Hash: 0f8bd4dcdac67128282d786cfe0bc201029e0a0f
Submitter: Marvin Liu <yong.liu@intel.com>
Delegated to: Maxime Coquelin <maxime.coquelin@redhat.com>
Project: DPDK (dev@dpdk.org) — http://core.dpdk.org
Series: add packed ring vectorized path, v9 — http://patches.dpdk.org/project/dpdk/list/?series=9605
Checks: fail — http://patches.dpdk.org/api/patches/69214/checks/
Web URL: http://patches.dpdk.org/project/dpdk/patch/20200424092445.44693-7-yong.liu@intel.com/
Mbox: http://patches.dpdk.org/project/dpdk/patch/20200424092445.44693-7-yong.liu@intel.com/mbox/
List archive: https://inbox.dpdk.org/dev/20200424092445.44693-7-yong.liu@intel.com
Comments: http://patches.dpdk.org/api/patches/69214/comments/

From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, harry.van.haaren@intel.com, Marvin Liu <yong.liu@intel.com>
Date: Fri, 24 Apr 2020 17:24:42 +0800
In-Reply-To: <20200424092445.44693-1-yong.liu@intel.com>
References: <20200313174230.74661-1-yong.liu@intel.com>
 <20200424092445.44693-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v9 6/9] net/virtio: reuse packed ring xmit functions

Move xmit offload and packed ring xmit enqueue function to header file.
These functions will be reused by packed ring vectorized Tx function.

Signed-off-by: Marvin Liu <yong.liu@intel.com>
---
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index c9b6e7844..cf18fe564 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -264,10 +264,6 @@ virtqueue_dequeue_rx_inorder(struct virtqueue *vq,
 	return i;
 }
 
-#ifndef DEFAULT_TX_FREE_THRESH
-#define DEFAULT_TX_FREE_THRESH 32
-#endif
-
 static void
 virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num)
 {
@@ -562,68 +558,7 @@ virtio_tso_fix_cksum(struct rte_mbuf *m)
 }
 
 
-/* avoid write operation when necessary, to lessen cache issues */
-#define ASSIGN_UNLESS_EQUAL(var, val) do {	\
-	if ((var) != (val))			\
-		(var) = (val);			\
-} while (0)
-
-#define virtqueue_clear_net_hdr(_hdr) do {		\
-	ASSIGN_UNLESS_EQUAL((_hdr)->csum_start, 0);	\
-	ASSIGN_UNLESS_EQUAL((_hdr)->csum_offset, 0);	\
-	ASSIGN_UNLESS_EQUAL((_hdr)->flags, 0);		\
-	ASSIGN_UNLESS_EQUAL((_hdr)->gso_type, 0);	\
-	ASSIGN_UNLESS_EQUAL((_hdr)->gso_size, 0);	\
-	ASSIGN_UNLESS_EQUAL((_hdr)->hdr_len, 0);	\
-} while (0)
-
-static inline void
-virtqueue_xmit_offload(struct virtio_net_hdr *hdr,
-			struct rte_mbuf *cookie,
-			bool offload)
-{
-	if (offload) {
-		if (cookie->ol_flags & PKT_TX_TCP_SEG)
-			cookie->ol_flags |= PKT_TX_TCP_CKSUM;
-
-		switch (cookie->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
-			hdr->csum_start = cookie->l2_len + cookie->l3_len;
-			hdr->csum_offset = offsetof(struct rte_udp_hdr,
-				dgram_cksum);
-			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			break;
-
-		case PKT_TX_TCP_CKSUM:
-			hdr->csum_start = cookie->l2_len + cookie->l3_len;
-			hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
-			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			break;
-
-		default:
-			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
-			break;
-		}
 
-		/* TCP Segmentation Offload */
-		if (cookie->ol_flags & PKT_TX_TCP_SEG) {
-			hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
-				VIRTIO_NET_HDR_GSO_TCPV6 :
-				VIRTIO_NET_HDR_GSO_TCPV4;
-			hdr->gso_size = cookie->tso_segsz;
-			hdr->hdr_len =
-				cookie->l2_len +
-				cookie->l3_len +
-				cookie->l4_len;
-		} else {
-			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
-		}
-	}
-}
 
 static inline void
 virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
@@ -725,102 +660,6 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
 	virtqueue_store_flags_packed(dp, flags, vq->hw->weak_barriers);
 }
 
-static inline void
-virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
-			 uint16_t needed, int can_push, int in_order)
-{
-	struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
-	struct vq_desc_extra *dxp;
-	struct virtqueue *vq = txvq->vq;
-	struct vring_packed_desc *start_dp, *head_dp;
-	uint16_t idx, id, head_idx, head_flags;
-	int16_t head_size = vq->hw->vtnet_hdr_size;
-	struct virtio_net_hdr *hdr;
-	uint16_t prev;
-	bool prepend_header = false;
-
-	id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx;
-
-	dxp = &vq->vq_descx[id];
-	dxp->ndescs = needed;
-	dxp->cookie = cookie;
-
-	head_idx = vq->vq_avail_idx;
-	idx = head_idx;
-	prev = head_idx;
-	start_dp = vq->vq_packed.ring.desc;
-
-	head_dp = &vq->vq_packed.ring.desc[idx];
-	head_flags = cookie->next ? VRING_DESC_F_NEXT : 0;
-	head_flags |= vq->vq_packed.cached_flags;
-
-	if (can_push) {
-		/* prepend cannot fail, checked by caller */
-		hdr = rte_pktmbuf_mtod_offset(cookie, struct virtio_net_hdr *,
-					 -head_size);
-		prepend_header = true;
-
-		/* if offload disabled, it is not zeroed below, do it now */
-		if (!vq->hw->has_tx_offload)
-			virtqueue_clear_net_hdr(hdr);
-	} else {
-		/* setup first tx ring slot to point to header
-		 * stored in reserved region.
-		 */
-		start_dp[idx].addr = txvq->virtio_net_hdr_mem +
-			RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
-		start_dp[idx].len = vq->hw->vtnet_hdr_size;
-		hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
-		idx++;
-		if (idx >= vq->vq_nentries) {
-			idx -= vq->vq_nentries;
-			vq->vq_packed.cached_flags ^=
-				VRING_PACKED_DESC_F_AVAIL_USED;
-		}
-	}
-
-	virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
-
-	do {
-		uint16_t flags;
-
-		start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
-		start_dp[idx].len = cookie->data_len;
-		if (prepend_header) {
-			start_dp[idx].addr -= head_size;
-			start_dp[idx].len += head_size;
-			prepend_header = false;
-		}
-
-		if (likely(idx != head_idx)) {
-			flags = cookie->next ? VRING_DESC_F_NEXT : 0;
-			flags |= vq->vq_packed.cached_flags;
-			start_dp[idx].flags = flags;
-		}
-		prev = idx;
-		idx++;
-		if (idx >= vq->vq_nentries) {
-			idx -= vq->vq_nentries;
-			vq->vq_packed.cached_flags ^=
-				VRING_PACKED_DESC_F_AVAIL_USED;
-		}
-	} while ((cookie = cookie->next) != NULL);
-
-	start_dp[prev].id = id;
-
-	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
-	vq->vq_avail_idx = idx;
-
-	if (!in_order) {
-		vq->vq_desc_head_idx = dxp->next;
-		if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
-			vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END;
-	}
-
-	virtqueue_store_flags_packed(head_dp, head_flags,
-				 vq->hw->weak_barriers);
-}
-
 static inline void
 virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 			uint16_t needed, int use_indirect, int can_push,
@@ -1246,7 +1085,6 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	return 0;
 }
 
-#define VIRTIO_MBUF_BURST_SZ 64
 #define DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_desc))
 uint16_t
 virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index d293a3189..18ae34789 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -563,4 +563,165 @@ virtqueue_notify(struct virtqueue *vq)
 #define VIRTQUEUE_DUMP(vq) do { } while (0)
 #endif
 
+/* avoid write operation when necessary, to lessen cache issues */
+#define ASSIGN_UNLESS_EQUAL(var, val) do {	\
+	typeof(var) var_ = (var);		\
+	typeof(val) val_ = (val);		\
+	if ((var_) != (val_))			\
+		(var_) = (val_);		\
+} while (0)
+
+#define virtqueue_clear_net_hdr(hdr) do {		\
+	typeof(hdr) hdr_ = (hdr);			\
+	ASSIGN_UNLESS_EQUAL((hdr_)->csum_start, 0);	\
+	ASSIGN_UNLESS_EQUAL((hdr_)->csum_offset, 0);	\
+	ASSIGN_UNLESS_EQUAL((hdr_)->flags, 0);		\
+	ASSIGN_UNLESS_EQUAL((hdr_)->gso_type, 0);	\
+	ASSIGN_UNLESS_EQUAL((hdr_)->gso_size, 0);	\
+	ASSIGN_UNLESS_EQUAL((hdr_)->hdr_len, 0);	\
+} while (0)
+
+static inline void
+virtqueue_xmit_offload(struct virtio_net_hdr *hdr,
+			struct rte_mbuf *cookie,
+			bool offload)
+{
+	if (offload) {
+		if (cookie->ol_flags & PKT_TX_TCP_SEG)
+			cookie->ol_flags |= PKT_TX_TCP_CKSUM;
+
+		switch (cookie->ol_flags & PKT_TX_L4_MASK) {
+		case PKT_TX_UDP_CKSUM:
+			hdr->csum_start = cookie->l2_len + cookie->l3_len;
+			hdr->csum_offset = offsetof(struct rte_udp_hdr,
+				dgram_cksum);
+			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+			break;
+
+		case PKT_TX_TCP_CKSUM:
+			hdr->csum_start = cookie->l2_len + cookie->l3_len;
+			hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
+			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+			break;
+
+		default:
+			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
+			break;
+		}
+
+		/* TCP Segmentation Offload */
+		if (cookie->ol_flags & PKT_TX_TCP_SEG) {
+			hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
+				VIRTIO_NET_HDR_GSO_TCPV6 :
+				VIRTIO_NET_HDR_GSO_TCPV4;
+			hdr->gso_size = cookie->tso_segsz;
+			hdr->hdr_len =
+				cookie->l2_len +
+				cookie->l3_len +
+				cookie->l4_len;
+		} else {
+			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
+		}
+	}
+}
+
+static inline void
+virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
+			 uint16_t needed, int can_push, int in_order)
+{
+	struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
+	struct vq_desc_extra *dxp;
+	struct virtqueue *vq = txvq->vq;
+	struct vring_packed_desc *start_dp, *head_dp;
+	uint16_t idx, id, head_idx, head_flags;
+	int16_t head_size = vq->hw->vtnet_hdr_size;
+	struct virtio_net_hdr *hdr;
+	uint16_t prev;
+	bool prepend_header = false;
+
+	id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx;
+
+	dxp = &vq->vq_descx[id];
+	dxp->ndescs = needed;
+	dxp->cookie = cookie;
+
+	head_idx = vq->vq_avail_idx;
+	idx = head_idx;
+	prev = head_idx;
+	start_dp = vq->vq_packed.ring.desc;
+
+	head_dp = &vq->vq_packed.ring.desc[idx];
+	head_flags = cookie->next ? VRING_DESC_F_NEXT : 0;
+	head_flags |= vq->vq_packed.cached_flags;
+
+	if (can_push) {
+		/* prepend cannot fail, checked by caller */
+		hdr = rte_pktmbuf_mtod_offset(cookie, struct virtio_net_hdr *,
+					 -head_size);
+		prepend_header = true;
+
+		/* if offload disabled, it is not zeroed below, do it now */
+		if (!vq->hw->has_tx_offload)
+			virtqueue_clear_net_hdr(hdr);
+	} else {
+		/* setup first tx ring slot to point to header
+		 * stored in reserved region.
+		 */
+		start_dp[idx].addr = txvq->virtio_net_hdr_mem +
+			RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
+		start_dp[idx].len = vq->hw->vtnet_hdr_size;
+		hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
+		idx++;
+		if (idx >= vq->vq_nentries) {
+			idx -= vq->vq_nentries;
+			vq->vq_packed.cached_flags ^=
+				VRING_PACKED_DESC_F_AVAIL_USED;
+		}
+	}
+
+	virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
+
+	do {
+		uint16_t flags;
+
+		start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
+		start_dp[idx].len = cookie->data_len;
+		if (prepend_header) {
+			start_dp[idx].addr -= head_size;
+			start_dp[idx].len += head_size;
+			prepend_header = false;
+		}
+
+		if (likely(idx != head_idx)) {
+			flags = cookie->next ? VRING_DESC_F_NEXT : 0;
+			flags |= vq->vq_packed.cached_flags;
+			start_dp[idx].flags = flags;
+		}
+		prev = idx;
+		idx++;
+		if (idx >= vq->vq_nentries) {
+			idx -= vq->vq_nentries;
+			vq->vq_packed.cached_flags ^=
+				VRING_PACKED_DESC_F_AVAIL_USED;
+		}
+	} while ((cookie = cookie->next) != NULL);
+
+	start_dp[prev].id = id;
+
+	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
+	vq->vq_avail_idx = idx;
+
+	if (!in_order) {
+		vq->vq_desc_head_idx = dxp->next;
+		if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+			vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END;
+	}
+
+	virtqueue_store_flags_packed(head_dp, head_flags,
+				 vq->hw->weak_barriers);
+}
 #endif /* _VIRTQUEUE_H_ */