Patch Detail
get: Show a patch.
patch: Update a patch.
put: Update a patch.
GET /api/patches/91634/?format=api
id: 91634
url: https://patches.dpdk.org/api/patches/91634/?format=api
web_url: https://patches.dpdk.org/project/dpdk/patch/09c440a88d9c546ebbe7d5d4cd2e0d2e53e4e870.1618568597.git.bnemeth@redhat.com/
project: DPDK (list: dev@dpdk.org, scm: git://dpdk.org/dpdk)
msgid: <09c440a88d9c546ebbe7d5d4cd2e0d2e53e4e870.1618568597.git.bnemeth@redhat.com>
list_archive_url: https://inbox.dpdk.org/dev/09c440a88d9c546ebbe7d5d4cd2e0d2e53e4e870.1618568597.git.bnemeth@redhat.com
date: 2021-04-16T10:25:19
name: [v5] vhost: allocate and free packets in bulk in Tx packed
state: accepted
archived: true
hash: 85645bb35694fda5ad7d4b28bf0b97a4d72fc2d8
submitter: Balazs Nemeth <bnemeth@redhat.com>
delegate: Maxime Coquelin <maxime.coquelin@redhat.com> (mcoquelin)
mbox: https://patches.dpdk.org/project/dpdk/patch/09c440a88d9c546ebbe7d5d4cd2e0d2e53e4e870.1618568597.git.bnemeth@redhat.com/mbox/
series: 16444 ([v5] vhost: allocate and free packets in bulk in Tx packed, version 5)
comments: https://patches.dpdk.org/api/patches/91634/comments/
check: warning
checks: https://patches.dpdk.org/api/patches/91634/checks/
prefixes: v5

Headers

From: Balazs Nemeth <bnemeth@redhat.com>
To: bnemeth@redhat.com, dev@dpdk.org
Cc: maxime.coquelin@redhat.com, david.marchand@redhat.com
Date: Fri, 16 Apr 2021 12:25:19 +0200
Subject: [dpdk-dev] [PATCH v5] vhost: allocate and free packets in bulk in Tx packed
Message-Id: <09c440a88d9c546ebbe7d5d4cd2e0d2e53e4e870.1618568597.git.bnemeth@redhat.com>
In-Reply-To: <f2eccbfa8a1f7aaa00f2da69ea9cb9a959f28e4f.1618566506.git.bnemeth@redhat.com>

Commit Message

Move allocation further out and perform all allocation in bulk. The same
goes for freeing packets. In the process, also introduce
virtio_dev_pktmbuf_prep and make virtio_dev_pktmbuf_alloc use it.

Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 80 +++++++++++++++++++----------------
 1 file changed, 44 insertions(+), 36 deletions(-)

Patch

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index ff39878609..77a00c9499 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -2134,6 +2134,24 @@ virtio_dev_extbuf_alloc(struct rte_mbuf *pkt, uint32_t size)
 	return 0;
 }
 
+static __rte_always_inline int
+virtio_dev_pktmbuf_prep(struct virtio_net *dev, struct rte_mbuf *pkt,
+			 uint32_t data_len)
+{
+	if (rte_pktmbuf_tailroom(pkt) >= data_len)
+		return 0;
+
+	/* attach an external buffer if supported */
+	if (dev->extbuf && !virtio_dev_extbuf_alloc(pkt, data_len))
+		return 0;
+
+	/* check if chained buffers are allowed */
+	if (!dev->linearbuf)
+		return 0;
+
+	return -1;
+}
+
 /*
  * Allocate a host supported pktmbuf.
  */
@@ -2149,23 +2167,15 @@ virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
 		return NULL;
 	}
 
-	if (rte_pktmbuf_tailroom(pkt) >= data_len)
-		return pkt;
+	if (virtio_dev_pktmbuf_prep(dev, pkt, data_len)) {
+		/* Data doesn't fit into the buffer and the host supports
+		 * only linear buffers
+		 */
+		rte_pktmbuf_free(pkt);
+		return NULL;
+	}
 
-	/* attach an external buffer if supported */
-	if (dev->extbuf && !virtio_dev_extbuf_alloc(pkt, data_len))
-		return pkt;
-
-	/* check if chained buffers are allowed */
-	if (!dev->linearbuf)
-		return pkt;
-
-	/* Data doesn't fit into the buffer and the host supports
-	 * only linear buffers
-	 */
-	rte_pktmbuf_free(pkt);
-
-	return NULL;
+	return pkt;
 }
 
 static __rte_noinline uint16_t
@@ -2261,7 +2271,6 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 static __rte_always_inline int
 vhost_reserve_avail_batch_packed(struct virtio_net *dev,
 				 struct vhost_virtqueue *vq,
-				 struct rte_mempool *mbuf_pool,
 				 struct rte_mbuf **pkts,
 				 uint16_t avail_idx,
 				 uintptr_t *desc_addrs,
@@ -2306,9 +2315,8 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
 	}
 
 	vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
-		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, lens[i]);
-		if (!pkts[i])
-			goto free_buf;
+		if (virtio_dev_pktmbuf_prep(dev, pkts[i], lens[i]))
+			goto err;
 	}
 
 	vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE)
@@ -2316,7 +2324,7 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
 
 	vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
 		if (unlikely(buf_lens[i] < (lens[i] - buf_offset)))
-			goto free_buf;
+			goto err;
 	}
 
 	vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
@@ -2327,17 +2335,13 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
 
 	return 0;
 
-free_buf:
-	for (i = 0; i < PACKED_BATCH_SIZE; i++)
-		rte_pktmbuf_free(pkts[i]);
-
+err:
 	return -1;
 }
 
 static __rte_always_inline int
 virtio_dev_tx_batch_packed(struct virtio_net *dev,
 			   struct vhost_virtqueue *vq,
-			   struct rte_mempool *mbuf_pool,
 			   struct rte_mbuf **pkts)
 {
 	uint16_t avail_idx = vq->last_avail_idx;
@@ -2347,8 +2351,8 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
 	uint16_t ids[PACKED_BATCH_SIZE];
 	uint16_t i;
 
-	if (vhost_reserve_avail_batch_packed(dev, vq, mbuf_pool, pkts,
-					     avail_idx, desc_addrs, ids))
+	if (vhost_reserve_avail_batch_packed(dev, vq, pkts, avail_idx,
+					     desc_addrs, ids))
 		return -1;
 
 	vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE)
@@ -2381,7 +2385,7 @@ static __rte_always_inline int
 vhost_dequeue_single_packed(struct virtio_net *dev,
 			    struct vhost_virtqueue *vq,
 			    struct rte_mempool *mbuf_pool,
-			    struct rte_mbuf **pkts,
+			    struct rte_mbuf *pkts,
 			    uint16_t *buf_id,
 			    uint16_t *desc_count)
 {
@@ -2398,8 +2402,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
 					VHOST_ACCESS_RO) < 0))
 		return -1;
 
-	*pkts = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
-	if (unlikely(*pkts == NULL)) {
+	if (unlikely(virtio_dev_pktmbuf_prep(dev, pkts, buf_len))) {
 		if (!allocerr_warned) {
 			VHOST_LOG_DATA(ERR,
 				"Failed mbuf alloc of size %d from %s on %s.\n",
@@ -2409,7 +2412,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
 		return -1;
 	}
 
-	err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, *pkts,
+	err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts,
 				mbuf_pool);
 	if (unlikely(err)) {
 		if (!allocerr_warned) {
@@ -2418,7 +2421,6 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
 				dev->ifname);
 			allocerr_warned = true;
 		}
-		rte_pktmbuf_free(*pkts);
 		return -1;
 	}
 
@@ -2429,7 +2431,7 @@ static __rte_always_inline int
 virtio_dev_tx_single_packed(struct virtio_net *dev,
 			    struct vhost_virtqueue *vq,
 			    struct rte_mempool *mbuf_pool,
-			    struct rte_mbuf **pkts)
+			    struct rte_mbuf *pkts)
 {
 
 	uint16_t buf_id, desc_count = 0;
@@ -2462,11 +2464,14 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 	uint32_t pkt_idx = 0;
 	uint32_t remained = count;
 
+	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
+		return 0;
+
 	do {
 		rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
 
 		if (remained >= PACKED_BATCH_SIZE) {
-			if (!virtio_dev_tx_batch_packed(dev, vq, mbuf_pool,
+			if (!virtio_dev_tx_batch_packed(dev, vq,
 							&pkts[pkt_idx])) {
 				pkt_idx += PACKED_BATCH_SIZE;
 				remained -= PACKED_BATCH_SIZE;
@@ -2475,13 +2480,16 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 		}
 
 		if (virtio_dev_tx_single_packed(dev, vq, mbuf_pool,
-						&pkts[pkt_idx]))
+						pkts[pkt_idx]))
 			break;
 		pkt_idx++;
 		remained--;
 
 	} while (remained);
 
+	if (pkt_idx != count)
+		rte_pktmbuf_free_bulk(&pkts[pkt_idx], count - pkt_idx);
+
 	if (vq->shadow_used_idx) {
 		do_data_copy_dequeue(vq);
 