Message ID | 20210323090219.126712-1-maxime.coquelin@redhat.com (mailing list archive) |
---|---|
Headers |
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, amorenoz@redhat.com, david.marchand@redhat.com, olivier.matz@6wind.com, bnemeth@redhat.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Tue, 23 Mar 2021 10:02:16 +0100
Message-Id: <20210323090219.126712-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v4 0/3] vhost: make virtqueue cache-friendly |
Series |
vhost: make virtqueue cache-friendly
Message
Maxime Coquelin
March 23, 2021, 9:02 a.m. UTC
As done for the Virtio PMD, this series improves cache utilization of the vhost_virtqueue struct by removing an unused field, making the live-migration cache dynamically allocated at live-migration setup time, and moving fields around so that hot fields sit on the first cachelines.

With this series, the struct vhost_virtqueue size goes from 832B (13 cachelines) down to 320B (5 cachelines).

With this series and the virtio one, I measure a gain of up to 8% in an IO loop micro-benchmark with packed ring, and 5% with split ring.

I don't have a setup at hand to run PVP testing, but it might be interesting to get the numbers, as I suspect the cache pressure is higher in this test than in real use-cases.

Changes in v4:
==============
- Fix missing changes to boolean (Chenbo)

Changes in v3:
==============
- Don't check pointer validity before freeing (David)
- Don't use deprecated rte_smp_wmb() (David, Checkpatch)
- Handle booleans properly (David)
- Prevent VQ size field overflow (David)
- Fix typo and indent (David)

Changes in v2:
==============
- Add log_cache freeing in free_vq (Chenbo)

Maxime Coquelin (3):
  vhost: remove unused Vhost virtqueue field
  vhost: move dirty logging cache out of the virtqueue
  vhost: optimize vhost virtqueue struct

 lib/librte_vhost/vhost.c      | 21 +++++++++----
 lib/librte_vhost/vhost.h      | 56 +++++++++++++++++------------------
 lib/librte_vhost/vhost_user.c | 44 +++++++++++++++++++--------
 lib/librte_vhost/virtio_net.c | 12 ++++----
 4 files changed, 82 insertions(+), 51 deletions(-)
Comments
On Tue, Mar 23, 2021 at 10:02 AM Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
>
> As done for Virtio PMD, this series improves cache utilization
> of the vhost_virtqueue struct by removing unused field,
> make the live-migration cache dynamically allocated at
> live-migration setup time and by moving fields
> around so that hot fields are on the first cachelines.
>
> With this series, The struct vhost_virtqueue size goes
> from 832B (13 cachelines) down to 320B (5 cachelines).
>
> With this series and the virtio one, I measure a gain
> of up to 8% in IO loop micro-benchmark with packed
> ring, and 5% with split ring.
>
> I don't have a setup at hand to run PVP testing, but
> it might be interresting to get the numbers as I
> suspect the cache pressure is higher in this test as
> in real use-cases.
>
> Changes in v4:
> ==============
> - Fix missing changes to boolean (Chenbo)
>

For the series,
Reviewed-by: David Marchand <david.marchand@redhat.com>

Merci !
On Tue, 2021-03-23 at 11:30 +0100, David Marchand wrote:
> On Tue, Mar 23, 2021 at 10:02 AM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
> >
> > As done for Virtio PMD, this series improves cache utilization
> > of the vhost_virtqueue struct by removing unused field,
> > make the live-migration cache dynamically allocated at
> > live-migration setup time and by moving fields
> > around so that hot fields are on the first cachelines.
> >
> > With this series, The struct vhost_virtqueue size goes
> > from 832B (13 cachelines) down to 320B (5 cachelines).
> >
> > With this series and the virtio one, I measure a gain
> > of up to 8% in IO loop micro-benchmark with packed
> > ring, and 5% with split ring.
> >
> > I don't have a setup at hand to run PVP testing, but
> > it might be interresting to get the numbers as I
> > suspect the cache pressure is higher in this test as
> > in real use-cases.
> >
> > Changes in v4:
> > ==============
> > - Fix missing changes to boolean (Chenbo)
> >
>
> For the series,
> Reviewed-by: David Marchand <david.marchand@redhat.com>
>
> Merci !

Tested this in a PVP setup on ARM, giving a slight improvement in performance.

For the series:
Tested-by: Balazs Nemeth <bnemeth@redhat.com>
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Tuesday, March 23, 2021 5:02 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>; amorenoz@redhat.com;
> david.marchand@redhat.com; olivier.matz@6wind.com; bnemeth@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v4 0/3] vhost: make virtqueue cache-friendly
>
> As done for Virtio PMD, this series improves cache utilization
> of the vhost_virtqueue struct by removing unused field,
> make the live-migration cache dynamically allocated at
> live-migration setup time and by moving fields
> around so that hot fields are on the first cachelines.
>
> With this series, The struct vhost_virtqueue size goes
> from 832B (13 cachelines) down to 320B (5 cachelines).
>
> With this series and the virtio one, I measure a gain
> of up to 8% in IO loop micro-benchmark with packed
> ring, and 5% with split ring.
>
> I don't have a setup at hand to run PVP testing, but
> it might be interresting to get the numbers as I
> suspect the cache pressure is higher in this test as
> in real use-cases.
>
> Maxime Coquelin (3):
>   vhost: remove unused Vhost virtqueue field
>   vhost: move dirty logging cache out of the virtqueue
>   vhost: optimize vhost virtqueue struct
>
>  lib/librte_vhost/vhost.c      | 21 +++++++++----
>  lib/librte_vhost/vhost.h      | 56 +++++++++++++++++------------------
>  lib/librte_vhost/vhost_user.c | 44 +++++++++++++++++++--------
>  lib/librte_vhost/virtio_net.c | 12 ++++----
>  4 files changed, 82 insertions(+), 51 deletions(-)
>
> --
> 2.30.2

Series applied to next-virtio/main, Thanks!