Patch Detail
GET /api/patches/49212/?format=api
{ "id": 49212, "url": "
https://patches.dpdk.org/api/patches/49212/?format=api", "web_url": "https://patches.dpdk.org/project/dpdk/patch/20181220172718.9615-4-maxime.coquelin@redhat.com/", "project": { "id": 1, "url": "https://patches.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<20181220172718.9615-4-maxime.coquelin@redhat.com>", "list_archive_url": "https://inbox.dpdk.org/dev/20181220172718.9615-4-maxime.coquelin@redhat.com", "date": "2018-12-20T17:27:18", "name": "[v3,3/3] net/virtio: improve batching in mergeable path", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": true, "hash": "de25e56dd9bd0330838f6470c2561dbcf2438065", "submitter": { "id": 512, "url": "https://patches.dpdk.org/api/people/512/?format=api", "name": "Maxime Coquelin", "email": "maxime.coquelin@redhat.com" }, "delegate": { "id": 2642, "url": "https://patches.dpdk.org/api/users/2642/?format=api", "username": "mcoquelin", "first_name": "Maxime", "last_name": "Coquelin", "email": "maxime.coquelin@redhat.com" }, "mbox": "https://patches.dpdk.org/project/dpdk/patch/20181220172718.9615-4-maxime.coquelin@redhat.com/mbox/", "series": [ { "id": 2908, "url": "https://patches.dpdk.org/api/series/2908/?format=api", "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=2908", "date": "2018-12-20T17:27:15", "name": "net/virtio: Rx paths improvements", "version": 3, "mbox": "https://patches.dpdk.org/series/2908/mbox/" } ], "comments": "https://patches.dpdk.org/api/patches/49212/comments/", "check": "success", "checks": "https://patches.dpdk.org/api/patches/49212/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", 
"X-Original-To": "patchwork@dpdk.org", "Delivered-To": "patchwork@dpdk.org", "Received": [ "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id E293B1BDEC;\n\tThu, 20 Dec 2018 18:27:38 +0100 (CET)", "from mx1.redhat.com (mx1.redhat.com [209.132.183.28])\n\tby dpdk.org (Postfix) with ESMTP id 282FD1BDEB\n\tfor <dev@dpdk.org>; Thu, 20 Dec 2018 18:27:37 +0100 (CET)", "from smtp.corp.redhat.com\n\t(int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby mx1.redhat.com (Postfix) with ESMTPS id 8A3798E3FF;\n\tThu, 20 Dec 2018 17:27:36 +0000 (UTC)", "from localhost.localdomain (ovpn-112-60.ams2.redhat.com\n\t[10.36.112.60])\n\tby smtp.corp.redhat.com (Postfix) with ESMTP id 7E85769284;\n\tThu, 20 Dec 2018 17:27:34 +0000 (UTC)" ], "From": "Maxime Coquelin <maxime.coquelin@redhat.com>", "To": "dev@dpdk.org, jfreimann@redhat.com, tiwei.bie@intel.com,\n\tzhihong.wang@intel.com", "Cc": "Maxime Coquelin <maxime.coquelin@redhat.com>", "Date": "Thu, 20 Dec 2018 18:27:18 +0100", "Message-Id": "<20181220172718.9615-4-maxime.coquelin@redhat.com>", "In-Reply-To": "<20181220172718.9615-1-maxime.coquelin@redhat.com>", "References": "<20181220172718.9615-1-maxime.coquelin@redhat.com>", "X-Scanned-By": "MIMEDefang 2.79 on 10.5.11.16", "X-Greylist": "Sender IP whitelisted, not delayed by milter-greylist-4.5.16\n\t(mx1.redhat.com [10.5.110.25]); Thu, 20 Dec 2018 17:27:36 +0000 (UTC)", "Subject": "[dpdk-dev] [PATCH v3 3/3] net/virtio: improve batching in mergeable\n\tpath", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": 
"<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "This patch improves both descriptors dequeue and refill,\nby using the same batching strategy as done in in-order path.\n\nSigned-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>\nTested-by: Jens Freimann <jfreimann@redhat.com>\nReviewed-by: Jens Freimann <jfreimann@redhat.com>\n---\n drivers/net/virtio/virtio_rxtx.c | 239 +++++++++++++++++--------------\n 1 file changed, 129 insertions(+), 110 deletions(-)", "diff": "diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c\nindex 58376ced3..1cfa2f0d6 100644\n--- a/drivers/net/virtio/virtio_rxtx.c\n+++ b/drivers/net/virtio/virtio_rxtx.c\n@@ -353,41 +353,44 @@ virtqueue_enqueue_refill_inorder(struct virtqueue *vq,\n }\n \n static inline int\n-virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)\n+virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie,\n+\t\t\t\tuint16_t num)\n {\n \tstruct vq_desc_extra *dxp;\n \tstruct virtio_hw *hw = vq->hw;\n-\tstruct vring_desc *start_dp;\n-\tuint16_t needed = 1;\n-\tuint16_t head_idx, idx;\n+\tstruct vring_desc *start_dp = vq->vq_ring.desc;\n+\tuint16_t idx, i;\n \n \tif (unlikely(vq->vq_free_cnt == 0))\n \t\treturn -ENOSPC;\n-\tif (unlikely(vq->vq_free_cnt < needed))\n+\tif (unlikely(vq->vq_free_cnt < num))\n \t\treturn -EMSGSIZE;\n \n-\thead_idx = vq->vq_desc_head_idx;\n-\tif (unlikely(head_idx >= vq->vq_nentries))\n+\tif (unlikely(vq->vq_desc_head_idx >= vq->vq_nentries))\n \t\treturn -EFAULT;\n \n-\tidx = head_idx;\n-\tdxp = &vq->vq_descx[idx];\n-\tdxp->cookie = (void *)cookie;\n-\tdxp->ndescs = needed;\n+\tfor (i = 0; i < num; i++) {\n+\t\tidx = vq->vq_desc_head_idx;\n+\t\tdxp = &vq->vq_descx[idx];\n+\t\tdxp->cookie = (void 
*)cookie[i];\n+\t\tdxp->ndescs = 1;\n \n-\tstart_dp = vq->vq_ring.desc;\n-\tstart_dp[idx].addr =\n-\t\tVIRTIO_MBUF_ADDR(cookie, vq) +\n-\t\tRTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;\n-\tstart_dp[idx].len =\n-\t\tcookie->buf_len - RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;\n-\tstart_dp[idx].flags = VRING_DESC_F_WRITE;\n-\tidx = start_dp[idx].next;\n-\tvq->vq_desc_head_idx = idx;\n-\tif (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)\n-\t\tvq->vq_desc_tail_idx = idx;\n-\tvq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);\n-\tvq_update_avail_ring(vq, head_idx);\n+\t\tstart_dp[idx].addr =\n+\t\t\tVIRTIO_MBUF_ADDR(cookie[i], vq) +\n+\t\t\tRTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;\n+\t\tstart_dp[idx].len =\n+\t\t\tcookie[i]->buf_len - RTE_PKTMBUF_HEADROOM +\n+\t\t\thw->vtnet_hdr_size;\n+\t\tstart_dp[idx].flags = VRING_DESC_F_WRITE;\n+\t\tvq->vq_desc_head_idx = start_dp[idx].next;\n+\t\tvq_update_avail_ring(vq, idx);\n+\t\tif (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END) {\n+\t\t\tvq->vq_desc_tail_idx = vq->vq_desc_head_idx;\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\tvq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);\n \n \treturn 0;\n }\n@@ -892,7 +895,8 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)\n \t\t\t\terror = virtqueue_enqueue_recv_refill_packed(vq,\n \t\t\t\t\t\t&m, 1);\n \t\t\telse\n-\t\t\t\terror = virtqueue_enqueue_recv_refill(vq, m);\n+\t\t\t\terror = virtqueue_enqueue_recv_refill(vq,\n+\t\t\t\t\t\t&m, 1);\n \t\t\tif (error) {\n \t\t\t\trte_pktmbuf_free(m);\n \t\t\t\tbreak;\n@@ -991,7 +995,7 @@ virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)\n \tif (vtpci_packed_queue(vq->hw))\n \t\terror = virtqueue_enqueue_recv_refill_packed(vq, &m, 1);\n \telse\n-\t\terror = virtqueue_enqueue_recv_refill(vq, m);\n+\t\terror = virtqueue_enqueue_recv_refill(vq, &m, 1);\n \n \tif (unlikely(error)) {\n \t\tRTE_LOG(ERR, PMD, \"cannot requeue discarded mbuf\");\n@@ -1211,7 +1215,7 @@ virtio_recv_pkts(void *rx_queue, struct 
rte_mbuf **rx_pkts, uint16_t nb_pkts)\n \t\t\tdev->data->rx_mbuf_alloc_failed++;\n \t\t\tbreak;\n \t\t}\n-\t\terror = virtqueue_enqueue_recv_refill(vq, new_mbuf);\n+\t\terror = virtqueue_enqueue_recv_refill(vq, &new_mbuf, 1);\n \t\tif (unlikely(error)) {\n \t\t\trte_pktmbuf_free(new_mbuf);\n \t\t\tbreak;\n@@ -1528,19 +1532,18 @@ virtio_recv_mergeable_pkts(void *rx_queue,\n \tstruct virtnet_rx *rxvq = rx_queue;\n \tstruct virtqueue *vq = rxvq->vq;\n \tstruct virtio_hw *hw = vq->hw;\n-\tstruct rte_mbuf *rxm, *new_mbuf;\n-\tuint16_t nb_used, num, nb_rx;\n+\tstruct rte_mbuf *rxm;\n+\tstruct rte_mbuf *prev;\n+\tuint16_t nb_used, num, nb_rx = 0;\n \tuint32_t len[VIRTIO_MBUF_BURST_SZ];\n \tstruct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];\n-\tstruct rte_mbuf *prev;\n \tint error;\n-\tuint32_t i, nb_enqueued;\n-\tuint32_t seg_num;\n-\tuint16_t extra_idx;\n-\tuint32_t seg_res;\n-\tuint32_t hdr_size;\n+\tuint32_t nb_enqueued = 0;\n+\tuint32_t seg_num = 0;\n+\tuint32_t seg_res = 0;\n+\tuint32_t hdr_size = hw->vtnet_hdr_size;\n+\tint32_t i;\n \n-\tnb_rx = 0;\n \tif (unlikely(hw->started == 0))\n \t\treturn nb_rx;\n \n@@ -1550,31 +1553,25 @@ virtio_recv_mergeable_pkts(void *rx_queue,\n \n \tPMD_RX_LOG(DEBUG, \"used:%d\", nb_used);\n \n-\ti = 0;\n-\tnb_enqueued = 0;\n-\tseg_num = 0;\n-\textra_idx = 0;\n-\tseg_res = 0;\n-\thdr_size = hw->vtnet_hdr_size;\n-\n-\twhile (i < nb_used) {\n-\t\tstruct virtio_net_hdr_mrg_rxbuf *header;\n+\tnum = likely(nb_used <= nb_pkts) ? 
nb_used : nb_pkts;\n+\tif (unlikely(num > VIRTIO_MBUF_BURST_SZ))\n+\t\tnum = VIRTIO_MBUF_BURST_SZ;\n+\tif (likely(num > DESC_PER_CACHELINE))\n+\t\tnum = num - ((vq->vq_used_cons_idx + num) %\n+\t\t\t\tDESC_PER_CACHELINE);\n \n-\t\tif (nb_rx == nb_pkts)\n-\t\t\tbreak;\n \n-\t\tnum = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len, 1);\n-\t\tif (num != 1)\n-\t\t\tcontinue;\n+\tnum = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len, num);\n \n-\t\ti++;\n+\tfor (i = 0; i < num; i++) {\n+\t\tstruct virtio_net_hdr_mrg_rxbuf *header;\n \n \t\tPMD_RX_LOG(DEBUG, \"dequeue:%d\", num);\n-\t\tPMD_RX_LOG(DEBUG, \"packet len:%d\", len[0]);\n+\t\tPMD_RX_LOG(DEBUG, \"packet len:%d\", len[i]);\n \n-\t\trxm = rcv_pkts[0];\n+\t\trxm = rcv_pkts[i];\n \n-\t\tif (unlikely(len[0] < hdr_size + ETHER_HDR_LEN)) {\n+\t\tif (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {\n \t\t\tPMD_RX_LOG(ERR, \"Packet drop\");\n \t\t\tnb_enqueued++;\n \t\t\tvirtio_discard_rxbuf(vq, rxm);\n@@ -1582,10 +1579,10 @@ virtio_recv_mergeable_pkts(void *rx_queue,\n \t\t\tcontinue;\n \t\t}\n \n-\t\theader = (struct virtio_net_hdr_mrg_rxbuf *)((char *)rxm->buf_addr +\n-\t\t\tRTE_PKTMBUF_HEADROOM - hdr_size);\n+\t\theader = (struct virtio_net_hdr_mrg_rxbuf *)\n+\t\t\t ((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM\n+\t\t\t - hdr_size);\n \t\tseg_num = header->num_buffers;\n-\n \t\tif (seg_num == 0)\n \t\t\tseg_num = 1;\n \n@@ -1593,10 +1590,11 @@ virtio_recv_mergeable_pkts(void *rx_queue,\n \t\trxm->nb_segs = seg_num;\n \t\trxm->ol_flags = 0;\n \t\trxm->vlan_tci = 0;\n-\t\trxm->pkt_len = (uint32_t)(len[0] - hdr_size);\n-\t\trxm->data_len = (uint16_t)(len[0] - hdr_size);\n+\t\trxm->pkt_len = (uint32_t)(len[i] - hdr_size);\n+\t\trxm->data_len = (uint16_t)(len[i] - hdr_size);\n \n \t\trxm->port = rxvq->port_id;\n+\n \t\trx_pkts[nb_rx] = rxm;\n \t\tprev = rxm;\n \n@@ -1607,75 +1605,96 @@ virtio_recv_mergeable_pkts(void *rx_queue,\n \t\t\tcontinue;\n \t\t}\n \n+\t\tif (hw->vlan_strip)\n+\t\t\trte_vlan_strip(rx_pkts[nb_rx]);\n+\n 
\t\tseg_res = seg_num - 1;\n \n-\t\twhile (seg_res != 0) {\n-\t\t\t/*\n-\t\t\t * Get extra segments for current uncompleted packet.\n-\t\t\t */\n-\t\t\tuint16_t rcv_cnt =\n-\t\t\t\tRTE_MIN(seg_res, RTE_DIM(rcv_pkts));\n-\t\t\tif (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {\n-\t\t\t\tuint32_t rx_num =\n-\t\t\t\t\tvirtqueue_dequeue_burst_rx(vq,\n-\t\t\t\t\trcv_pkts, len, rcv_cnt);\n-\t\t\t\ti += rx_num;\n-\t\t\t\trcv_cnt = rx_num;\n-\t\t\t} else {\n-\t\t\t\tPMD_RX_LOG(ERR,\n-\t\t\t\t\t \"No enough segments for packet.\");\n-\t\t\t\tnb_enqueued++;\n-\t\t\t\tvirtio_discard_rxbuf(vq, rxm);\n-\t\t\t\trxvq->stats.errors++;\n-\t\t\t\tbreak;\n-\t\t\t}\n+\t\t/* Merge remaining segments */\n+\t\twhile (seg_res != 0 && i < (num - 1)) {\n+\t\t\ti++;\n+\n+\t\t\trxm = rcv_pkts[i];\n+\t\t\trxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;\n+\t\t\trxm->pkt_len = (uint32_t)(len[i]);\n+\t\t\trxm->data_len = (uint16_t)(len[i]);\n+\n+\t\t\trx_pkts[nb_rx]->pkt_len += (uint32_t)(len[i]);\n+\t\t\trx_pkts[nb_rx]->data_len += (uint16_t)(len[i]);\n+\n+\t\t\tif (prev)\n+\t\t\t\tprev->next = rxm;\n+\n+\t\t\tprev = rxm;\n+\t\t\tseg_res -= 1;\n+\t\t}\n+\n+\t\tif (!seg_res) {\n+\t\t\tvirtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);\n+\t\t\tnb_rx++;\n+\t\t}\n+\t}\n+\n+\t/* Last packet still need merge segments */\n+\twhile (seg_res != 0) {\n+\t\tuint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res,\n+\t\t\t\t\tVIRTIO_MBUF_BURST_SZ);\n \n-\t\t\textra_idx = 0;\n+\t\tprev = rcv_pkts[nb_rx];\n+\t\tif (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {\n+\t\t\tnum = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len,\n+\t\t\t\t\t\t\t rcv_cnt);\n+\t\t\tuint16_t extra_idx = 0;\n \n+\t\t\trcv_cnt = num;\n \t\t\twhile (extra_idx < rcv_cnt) {\n \t\t\t\trxm = rcv_pkts[extra_idx];\n-\n-\t\t\t\trxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;\n+\t\t\t\trxm->data_off =\n+\t\t\t\t\tRTE_PKTMBUF_HEADROOM - hdr_size;\n \t\t\t\trxm->pkt_len = (uint32_t)(len[extra_idx]);\n \t\t\t\trxm->data_len = (uint16_t)(len[extra_idx]);\n-\n-\t\t\t\tif 
(prev)\n-\t\t\t\t\tprev->next = rxm;\n-\n+\t\t\t\tprev->next = rxm;\n \t\t\t\tprev = rxm;\n-\t\t\t\trx_pkts[nb_rx]->pkt_len += rxm->pkt_len;\n-\t\t\t\textra_idx++;\n+\t\t\t\trx_pkts[nb_rx]->pkt_len += len[extra_idx];\n+\t\t\t\trx_pkts[nb_rx]->data_len += len[extra_idx];\n+\t\t\t\textra_idx += 1;\n \t\t\t};\n \t\t\tseg_res -= rcv_cnt;\n-\t\t}\n-\n-\t\tif (hw->vlan_strip)\n-\t\t\trte_vlan_strip(rx_pkts[nb_rx]);\n-\n-\t\tVIRTIO_DUMP_PACKET(rx_pkts[nb_rx],\n-\t\t\trx_pkts[nb_rx]->data_len);\n \n-\t\tvirtio_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]);\n-\t\tnb_rx++;\n+\t\t\tif (!seg_res) {\n+\t\t\t\tvirtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);\n+\t\t\t\tnb_rx++;\n+\t\t\t}\n+\t\t} else {\n+\t\t\tPMD_RX_LOG(ERR,\n+\t\t\t\t\t\"No enough segments for packet.\");\n+\t\t\tvirtio_discard_rxbuf(vq, prev);\n+\t\t\trxvq->stats.errors++;\n+\t\t\tbreak;\n+\t\t}\n \t}\n \n \trxvq->stats.packets += nb_rx;\n \n \t/* Allocate new mbuf for the used descriptor */\n-\twhile (likely(!virtqueue_full(vq))) {\n-\t\tnew_mbuf = rte_mbuf_raw_alloc(rxvq->mpool);\n-\t\tif (unlikely(new_mbuf == NULL)) {\n-\t\t\tstruct rte_eth_dev *dev\n-\t\t\t\t= &rte_eth_devices[rxvq->port_id];\n-\t\t\tdev->data->rx_mbuf_alloc_failed++;\n-\t\t\tbreak;\n-\t\t}\n-\t\terror = virtqueue_enqueue_recv_refill(vq, new_mbuf);\n-\t\tif (unlikely(error)) {\n-\t\t\trte_pktmbuf_free(new_mbuf);\n-\t\t\tbreak;\n+\tif (likely(!virtqueue_full(vq))) {\n+\t\t/* free_cnt may include mrg descs */\n+\t\tuint16_t free_cnt = vq->vq_free_cnt;\n+\t\tstruct rte_mbuf *new_pkts[free_cnt];\n+\n+\t\tif (!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) {\n+\t\t\terror = virtqueue_enqueue_recv_refill(vq, new_pkts,\n+\t\t\t\t\tfree_cnt);\n+\t\t\tif (unlikely(error)) {\n+\t\t\t\tfor (i = 0; i < free_cnt; i++)\n+\t\t\t\t\trte_pktmbuf_free(new_pkts[i]);\n+\t\t\t}\n+\t\t\tnb_enqueued += free_cnt;\n+\t\t} else {\n+\t\t\tstruct rte_eth_dev *dev =\n+\t\t\t\t&rte_eth_devices[rxvq->port_id];\n+\t\t\tdev->data->rx_mbuf_alloc_failed += 
free_cnt;\n \t\t}\n-\t\tnb_enqueued++;\n \t}\n \n \tif (likely(nb_enqueued)) {\n", "prefixes": [ "v3", "3/3" ] }