From patchwork Fri Dec 20 03:09:49 2019
From: Gavin Hu
To: dev@dpdk.org
Cc: nd@arm.com, david.marchand@redhat.com, thomas@monjalon.net,
 rasland@mellanox.com, maxime.coquelin@redhat.com, tiwei.bie@intel.com,
 hemant.agrawal@nxp.com, jerinj@marvell.com, pbhagavatula@marvell.com,
 Honnappa.Nagarahalli@arm.com, ruifeng.wang@arm.com, phil.yang@arm.com,
 joyce.kong@arm.com, steve.capper@arm.com
Date: Fri, 20 Dec 2019 11:09:49 +0800
Message-Id: <1576811391-19131-2-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1576811391-19131-1-git-send-email-gavin.hu@arm.com>
References: <1576811391-19131-1-git-send-email-gavin.hu@arm.com>
Subject: [dpdk-dev] [PATCH v2 1/3] eal/arm64: relax the io barrier for aarch64

Armv8's peripheral coherence order is a total order on all reads and
writes to that peripheral [1]. The peripheral coherence order for a
memory-mapped peripheral signifies the order in which accesses arrive
at the endpoint. For a read or a write RW1 and a read or a write RW2
to the same peripheral, RW1 appears in the peripheral coherence order
for the peripheral before RW2 if either of the following cases applies:

1. RW1 and RW2 are accesses using Non-cacheable or Device attributes
   and RW1 is Ordered-before RW2.
2. RW1 and RW2 are accesses using Device-nGnRE or Device-nGnRnE
   attributes and RW1 appears in program order before RW2.

On arm platforms, all PCI resources are mapped as nGnRE device memory
[2], so case 2 above holds: the peripheral coherence order applies,
and a compiler barrier is sufficient for the rte io barriers.
[1] Section B2.3.4 of the Arm Architecture Reference Manual (Armv8-A),
    https://developer.arm.com/docs/ddi0487/latest/arm-architecture-reference-manual-armv8-for-armv8-a-architecture-profile
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/pci-sysfs.c#n1204

Signed-off-by: Gavin Hu
Reviewed-by: Steve Capper
Reviewed-by: Phil Yang
---
 lib/librte_eal/common/include/arch/arm/rte_atomic_64.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h b/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
index 859ae12..fd63956 100644
--- a/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
+++ b/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
@@ -34,11 +34,11 @@ extern "C" {

 #define rte_smp_rmb() dmb(ishld)

-#define rte_io_mb() rte_mb()
+#define rte_io_mb() rte_compiler_barrier()

-#define rte_io_wmb() rte_wmb()
+#define rte_io_wmb() rte_compiler_barrier()

-#define rte_io_rmb() rte_rmb()
+#define rte_io_rmb() rte_compiler_barrier()

 #define rte_cio_wmb() dmb(oshst)
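For readers less familiar with the aarch64 barriers being swapped
above, here is a minimal standalone sketch of the difference, assuming
GCC-style inline asm on aarch64. The barrier macro bodies mirror the
definitions in rte_atomic_64.h; the device_post_and_ring() example and
its names are hypothetical illustration, not DPDK API.

/* A minimal sketch, assuming aarch64 and GCC-style inline asm. */
#include <stdint.h>

#define dsb(opt) asm volatile("dsb " #opt : : : "memory")
#define compiler_barrier() asm volatile("" : : : "memory")

/* Before this patch, rte_io_wmb() was rte_wmb(), i.e. a full DSB ST
 * that stalls until prior stores have completed. */
static inline void io_wmb_old(void) { dsb(st); }

/* After this patch it is only a compiler barrier: for two accesses to
 * the same Device-nGnRE peripheral, case 2 of the peripheral coherence
 * order already guarantees program order at the endpoint, so only
 * compiler reordering must be prevented. */
static inline void io_wmb_new(void) { compiler_barrier(); }

/* Write a control register, then ring the doorbell register of the
 * same peripheral; both pointers map nGnRE device memory. */
static inline void
device_post_and_ring(volatile uint32_t *ctrl, volatile uint32_t *doorbell,
		     uint32_t cmd)
{
	*ctrl = cmd;
	io_wmb_new();	/* compiler barrier only, no dsb/dmb emitted */
	*doorbell = 1;
}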
From patchwork Fri Dec 20 03:09:50 2019
From: Gavin Hu
To: dev@dpdk.org
Cc: nd@arm.com, david.marchand@redhat.com, thomas@monjalon.net,
 rasland@mellanox.com, maxime.coquelin@redhat.com, tiwei.bie@intel.com,
 hemant.agrawal@nxp.com, jerinj@marvell.com, pbhagavatula@marvell.com,
 Honnappa.Nagarahalli@arm.com, ruifeng.wang@arm.com, phil.yang@arm.com,
 joyce.kong@arm.com, steve.capper@arm.com
Date: Fri, 20 Dec 2019 11:09:50 +0800
Message-Id: <1576811391-19131-3-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1576811391-19131-1-git-send-email-gavin.hu@arm.com>
References: <1576811391-19131-1-git-send-email-gavin.hu@arm.com>
Subject: [dpdk-dev] [PATCH v2 2/3] net/virtio: virtual PCI requires smp barriers

Unlike reads and writes to the device memory of a real PCI device,
which require the io barriers, virtual PCI memory is normal memory in
an SMP configuration, so it requires the smp barriers. Since the smp
barriers and io barriers are identical on x86 and PPC, this change
only has an effect on aarch64.

As far as the peripheral coherence order for 'virtual' devices is
concerned, the architectural intent is that the hypervisor's view
takes precedence: since translations are made holistically across the
full stage 1 + stage 2 regime, there is no such thing as a transaction
taking on the "EL1" mapping as far as ordering goes. If the hypervisor
maps stage 2 as Normal but the OS at EL1 maps it as Device-nGnRE, then
it is Normal memory and follows the ordering rules for Normal memory.

Signed-off-by: Gavin Hu
---
 drivers/net/virtio/virtio_pci.c | 108 +++++++++++++++++++++++++++++-----------
 1 file changed, 78 insertions(+), 30 deletions(-)

diff --git a/drivers/net/virtio/virtio_pci.c b/drivers/net/virtio/virtio_pci.c
index 4468e89..64aa0a0 100644
--- a/drivers/net/virtio/virtio_pci.c
+++ b/drivers/net/virtio/virtio_pci.c
@@ -24,6 +24,54 @@
 #define PCI_CAP_ID_VNDR		0x09
 #define PCI_CAP_ID_MSIX		0x11

+static __rte_always_inline uint8_t
+virtio_pci_read8(const volatile void *addr)
+{
+	uint8_t val;
+	val = rte_read8_relaxed(addr);
+	rte_smp_rmb();
+	return val;
+}
+
+static __rte_always_inline uint16_t
+virtio_pci_read16(const volatile void *addr)
+{
+	uint16_t val;
+	val = rte_read16_relaxed(addr);
+	rte_smp_rmb();
+	return val;
+}
+
+static __rte_always_inline uint32_t
+virtio_pci_read32(const volatile void *addr)
+{
+	uint32_t val;
+	val = rte_read32_relaxed(addr);
+	rte_smp_rmb();
+	return val;
+}
+
+static __rte_always_inline void
+virtio_pci_write8(uint8_t value, volatile void *addr)
+{
+	rte_smp_wmb();
+	rte_write8_relaxed(value, addr);
+}
+
+static __rte_always_inline void
+virtio_pci_write16(uint16_t value, volatile void *addr)
+{
+	rte_smp_wmb();
+	rte_write16_relaxed(value, addr);
+}
+
+static __rte_always_inline void
+virtio_pci_write32(uint32_t value, volatile void *addr)
+{
+	rte_smp_wmb();
+	rte_write32_relaxed(value, addr);
+}
+
 /*
  * The remaining space is defined by each driver as the per-driver
  * configuration space.
@@ -260,8 +308,8 @@ const struct virtio_pci_ops legacy_ops = {
 static inline void
 io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi)
 {
-	rte_write32(val & ((1ULL << 32) - 1), lo);
-	rte_write32(val >> 32, hi);
+	virtio_pci_write32(val & ((1ULL << 32) - 1), lo);
+	virtio_pci_write32(val >> 32, hi);
 }

 static void
@@ -273,13 +321,13 @@ modern_read_dev_config(struct virtio_hw *hw, size_t offset,
 	uint8_t old_gen, new_gen;

 	do {
-		old_gen = rte_read8(&hw->common_cfg->config_generation);
+		old_gen = virtio_pci_read8(&hw->common_cfg->config_generation);

 		p = dst;
 		for (i = 0; i < length; i++)
-			*p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i);
+			*p++ = virtio_pci_read8((uint8_t *)hw->dev_cfg + offset + i);

-		new_gen = rte_read8(&hw->common_cfg->config_generation);
+		new_gen = virtio_pci_read8(&hw->common_cfg->config_generation);
 	} while (old_gen != new_gen);
 }

@@ -291,7 +339,7 @@ modern_write_dev_config(struct virtio_hw *hw, size_t offset,
 	const uint8_t *p = src;

 	for (i = 0; i < length; i++)
-		rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
+		virtio_pci_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
 }

 static uint64_t
@@ -299,11 +347,11 @@ modern_get_features(struct virtio_hw *hw)
 {
 	uint32_t features_lo, features_hi;

-	rte_write32(0, &hw->common_cfg->device_feature_select);
-	features_lo = rte_read32(&hw->common_cfg->device_feature);
+	virtio_pci_write32(0, &hw->common_cfg->device_feature_select);
+	features_lo = virtio_pci_read32(&hw->common_cfg->device_feature);

-	rte_write32(1, &hw->common_cfg->device_feature_select);
-	features_hi = rte_read32(&hw->common_cfg->device_feature);
+	virtio_pci_write32(1, &hw->common_cfg->device_feature_select);
+	features_hi = virtio_pci_read32(&hw->common_cfg->device_feature);

 	return ((uint64_t)features_hi << 32) | features_lo;
 }

@@ -311,53 +359,53 @@ modern_get_features(struct virtio_hw *hw)
 static void
 modern_set_features(struct virtio_hw *hw, uint64_t features)
 {
-	rte_write32(0, &hw->common_cfg->guest_feature_select);
-	rte_write32(features & ((1ULL << 32) - 1),
+	virtio_pci_write32(0, &hw->common_cfg->guest_feature_select);
+	virtio_pci_write32(features & ((1ULL << 32) - 1),
 		&hw->common_cfg->guest_feature);

-	rte_write32(1, &hw->common_cfg->guest_feature_select);
-	rte_write32(features >> 32,
+	virtio_pci_write32(1, &hw->common_cfg->guest_feature_select);
+	virtio_pci_write32(features >> 32,
 		&hw->common_cfg->guest_feature);
 }

 static uint8_t
 modern_get_status(struct virtio_hw *hw)
 {
-	return rte_read8(&hw->common_cfg->device_status);
+	return virtio_pci_read8(&hw->common_cfg->device_status);
 }

 static void
 modern_set_status(struct virtio_hw *hw, uint8_t status)
 {
-	rte_write8(status, &hw->common_cfg->device_status);
+	virtio_pci_write8(status, &hw->common_cfg->device_status);
 }

 static uint8_t
 modern_get_isr(struct virtio_hw *hw)
 {
-	return rte_read8(hw->isr);
+	return virtio_pci_read8(hw->isr);
 }

 static uint16_t
 modern_set_config_irq(struct virtio_hw *hw, uint16_t vec)
 {
-	rte_write16(vec, &hw->common_cfg->msix_config);
-	return rte_read16(&hw->common_cfg->msix_config);
+	virtio_pci_write16(vec, &hw->common_cfg->msix_config);
+	return virtio_pci_read16(&hw->common_cfg->msix_config);
 }

 static uint16_t
 modern_set_queue_irq(struct virtio_hw *hw, struct virtqueue *vq, uint16_t vec)
 {
-	rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
-	rte_write16(vec, &hw->common_cfg->queue_msix_vector);
-	return rte_read16(&hw->common_cfg->queue_msix_vector);
+	virtio_pci_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+	virtio_pci_write16(vec, &hw->common_cfg->queue_msix_vector);
+	return virtio_pci_read16(&hw->common_cfg->queue_msix_vector);
 }

 static uint16_t
 modern_get_queue_num(struct virtio_hw *hw, uint16_t queue_id)
 {
-	rte_write16(queue_id, &hw->common_cfg->queue_select);
-	return rte_read16(&hw->common_cfg->queue_size);
+	virtio_pci_write16(queue_id, &hw->common_cfg->queue_select);
+	return virtio_pci_read16(&hw->common_cfg->queue_size);
 }

 static int
@@ -375,7 +423,7 @@ modern_setup_queue(struct virtio_hw *hw, struct virtqueue *vq)
 							ring[vq->vq_nentries]),
 				   VIRTIO_PCI_VRING_ALIGN);

-	rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+	virtio_pci_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);

 	io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo,
 				      &hw->common_cfg->queue_desc_hi);
@@ -384,11 +432,11 @@ modern_setup_queue(struct virtio_hw *hw, struct virtqueue *vq)
 	io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo,
 				      &hw->common_cfg->queue_used_hi);

-	notify_off = rte_read16(&hw->common_cfg->queue_notify_off);
+	notify_off = virtio_pci_read16(&hw->common_cfg->queue_notify_off);
 	vq->notify_addr = (void *)((uint8_t *)hw->notify_base +
 				   notify_off * hw->notify_off_multiplier);

-	rte_write16(1, &hw->common_cfg->queue_enable);
+	virtio_pci_write16(1, &hw->common_cfg->queue_enable);

 	PMD_INIT_LOG(DEBUG, "queue %u addresses:", vq->vq_queue_index);
 	PMD_INIT_LOG(DEBUG, "\t desc_addr: %" PRIx64, desc_addr);
@@ -403,7 +451,7 @@ modern_setup_queue(struct virtio_hw *hw, struct virtqueue *vq)
 static void
 modern_del_queue(struct virtio_hw *hw, struct virtqueue *vq)
 {
-	rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+	virtio_pci_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);

 	io_write64_twopart(0, &hw->common_cfg->queue_desc_lo,
 			      &hw->common_cfg->queue_desc_hi);
@@ -412,13 +460,13 @@ modern_del_queue(struct virtio_hw *hw, struct virtqueue *vq)
 	io_write64_twopart(0, &hw->common_cfg->queue_used_lo,
 			      &hw->common_cfg->queue_used_hi);

-	rte_write16(0, &hw->common_cfg->queue_enable);
+	virtio_pci_write16(0, &hw->common_cfg->queue_enable);
 }

 static void
 modern_notify_queue(struct virtio_hw *hw __rte_unused, struct virtqueue *vq)
 {
-	rte_write16(vq->vq_queue_index, vq->notify_addr);
+	virtio_pci_write16(vq->vq_queue_index, vq->notify_addr);
 }

 const struct virtio_pci_ops modern_ops = {
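The helpers introduced by this patch pair a relaxed MMIO accessor with
an smp barrier. As a rough portable rendition of the same ordering
contract, here is a sketch in C11 atomics; the vpci_* names are
illustrative, not DPDK API, and on aarch64 rte_smp_rmb()/rte_smp_wmb()
actually expand to the cheaper dmb ishld / dmb ishst rather than these
slightly stronger fences.

#include <stdatomic.h>
#include <stdint.h>

/* Read a "register" that is really normal memory backed by the
 * hypervisor, then fence so the value is ordered before any
 * program-order-later loads -- the analogue of rte_read8_relaxed()
 * followed by rte_smp_rmb(). */
static inline uint8_t
vpci_read8(const volatile uint8_t *addr)
{
	uint8_t val = *addr;
	atomic_thread_fence(memory_order_acquire);
	return val;
}

/* Fence so earlier stores are visible before the "register" write --
 * the analogue of rte_smp_wmb() followed by rte_write8_relaxed(). */
static inline void
vpci_write8(volatile uint8_t *addr, uint8_t val)
{
	atomic_thread_fence(memory_order_release);
	*addr = val;
}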
From patchwork Fri Dec 20 03:09:51 2019
From: Gavin Hu
To: dev@dpdk.org
Cc: nd@arm.com, david.marchand@redhat.com, thomas@monjalon.net,
 rasland@mellanox.com, maxime.coquelin@redhat.com, tiwei.bie@intel.com,
 hemant.agrawal@nxp.com, jerinj@marvell.com, pbhagavatula@marvell.com,
 Honnappa.Nagarahalli@arm.com, ruifeng.wang@arm.com, phil.yang@arm.com,
 joyce.kong@arm.com, steve.capper@arm.com
Date: Fri, 20 Dec 2019 11:09:51 +0800
Message-Id: <1576811391-19131-4-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1576811391-19131-1-git-send-email-gavin.hu@arm.com>
References: <1576811391-19131-1-git-send-email-gavin.hu@arm.com>
Subject: [dpdk-dev] [PATCH v2 3/3] crypto/virtio: virtual PCI requires smp barriers

Unlike reads and writes to the device memory of a real PCI device,
which require the io barriers, virtual PCI memory is normal memory in
an SMP configuration, so it requires the smp barriers. Since the smp
barriers and io barriers are identical on x86 and PPC, this change
only has an effect on aarch64.

As far as the peripheral coherence order for 'virtual' devices is
concerned, the architectural intent is that the hypervisor's view
takes precedence: since translations are made holistically across the
full stage 1 + stage 2 regime, there is no such thing as a transaction
taking on the "EL1" mapping as far as ordering goes. If the hypervisor
maps stage 2 as Normal but the OS at EL1 maps it as Device-nGnRE, then
it is Normal memory and follows the ordering rules for Normal memory.

Signed-off-by: Gavin Hu
---
 drivers/crypto/virtio/virtio_pci.c | 108 ++++++++++++++++++++++++++-----------
 1 file changed, 78 insertions(+), 30 deletions(-)

diff --git a/drivers/crypto/virtio/virtio_pci.c b/drivers/crypto/virtio/virtio_pci.c
index 8137b3c..dd8eda8 100644
--- a/drivers/crypto/virtio/virtio_pci.c
+++ b/drivers/crypto/virtio/virtio_pci.c
@@ -24,6 +24,54 @@
 #define PCI_CAP_ID_VNDR		0x09
 #define PCI_CAP_ID_MSIX		0x11

+static __rte_always_inline uint8_t
+virtio_pci_read8(const volatile void *addr)
+{
+	uint8_t val;
+	val = rte_read8_relaxed(addr);
+	rte_smp_rmb();
+	return val;
+}
+
+static __rte_always_inline uint16_t
+virtio_pci_read16(const volatile void *addr)
+{
+	uint16_t val;
+	val = rte_read16_relaxed(addr);
+	rte_smp_rmb();
+	return val;
+}
+
+static __rte_always_inline uint32_t
+virtio_pci_read32(const volatile void *addr)
+{
+	uint32_t val;
+	val = rte_read32_relaxed(addr);
+	rte_smp_rmb();
+	return val;
+}
+
+static __rte_always_inline void
+virtio_pci_write8(uint8_t value, volatile void *addr)
+{
+	rte_smp_wmb();
+	rte_write8_relaxed(value, addr);
+}
+
+static __rte_always_inline void
+virtio_pci_write16(uint16_t value, volatile void *addr)
+{
+	rte_smp_wmb();
+	rte_write16_relaxed(value, addr);
+}
+
+static __rte_always_inline void
+virtio_pci_write32(uint32_t value, volatile void *addr)
+{
+	rte_smp_wmb();
+	rte_write32_relaxed(value, addr);
+}
+
 /*
  * The remaining space is defined by each driver as the per-driver
  * configuration space.
@@ -52,8 +100,8 @@ check_vq_phys_addr_ok(struct virtqueue *vq)
 static inline void
 io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi)
 {
-	rte_write32(val & ((1ULL << 32) - 1), lo);
-	rte_write32(val >> 32, hi);
+	virtio_pci_write32(val & ((1ULL << 32) - 1), lo);
+	virtio_pci_write32(val >> 32, hi);
 }

 static void
@@ -65,13 +113,13 @@ modern_read_dev_config(struct virtio_crypto_hw *hw, size_t offset,
 	uint8_t old_gen, new_gen;

 	do {
-		old_gen = rte_read8(&hw->common_cfg->config_generation);
+		old_gen = virtio_pci_read8(&hw->common_cfg->config_generation);

 		p = dst;
 		for (i = 0; i < length; i++)
-			*p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i);
+			*p++ = virtio_pci_read8((uint8_t *)hw->dev_cfg + offset + i);

-		new_gen = rte_read8(&hw->common_cfg->config_generation);
+		new_gen = virtio_pci_read8(&hw->common_cfg->config_generation);
 	} while (old_gen != new_gen);
 }

@@ -83,7 +131,7 @@ modern_write_dev_config(struct virtio_crypto_hw *hw, size_t offset,
 	const uint8_t *p = src;

 	for (i = 0; i < length; i++)
-		rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
+		virtio_pci_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
 }

 static uint64_t
@@ -91,11 +139,11 @@ modern_get_features(struct virtio_crypto_hw *hw)
 {
 	uint32_t features_lo, features_hi;

-	rte_write32(0, &hw->common_cfg->device_feature_select);
-	features_lo = rte_read32(&hw->common_cfg->device_feature);
+	virtio_pci_write32(0, &hw->common_cfg->device_feature_select);
+	features_lo = virtio_pci_read32(&hw->common_cfg->device_feature);

-	rte_write32(1, &hw->common_cfg->device_feature_select);
-	features_hi = rte_read32(&hw->common_cfg->device_feature);
+	virtio_pci_write32(1, &hw->common_cfg->device_feature_select);
+	features_hi = virtio_pci_read32(&hw->common_cfg->device_feature);

 	return ((uint64_t)features_hi << 32) | features_lo;
 }

@@ -103,25 +151,25 @@ modern_get_features(struct virtio_crypto_hw *hw)
 static void
 modern_set_features(struct virtio_crypto_hw *hw, uint64_t features)
 {
-	rte_write32(0, &hw->common_cfg->guest_feature_select);
-	rte_write32(features & ((1ULL << 32) - 1),
+	virtio_pci_write32(0, &hw->common_cfg->guest_feature_select);
+	virtio_pci_write32(features & ((1ULL << 32) - 1),
 		&hw->common_cfg->guest_feature);

-	rte_write32(1, &hw->common_cfg->guest_feature_select);
-	rte_write32(features >> 32,
+	virtio_pci_write32(1, &hw->common_cfg->guest_feature_select);
+	virtio_pci_write32(features >> 32,
 		&hw->common_cfg->guest_feature);
 }

 static uint8_t
 modern_get_status(struct virtio_crypto_hw *hw)
 {
-	return rte_read8(&hw->common_cfg->device_status);
+	return virtio_pci_read8(&hw->common_cfg->device_status);
 }

 static void
 modern_set_status(struct virtio_crypto_hw *hw, uint8_t status)
 {
-	rte_write8(status, &hw->common_cfg->device_status);
+	virtio_pci_write8(status, &hw->common_cfg->device_status);
 }

 static void
@@ -134,30 +182,30 @@ modern_reset(struct virtio_crypto_hw *hw)
 static uint8_t
 modern_get_isr(struct virtio_crypto_hw *hw)
 {
-	return rte_read8(hw->isr);
+	return virtio_pci_read8(hw->isr);
 }

 static uint16_t
 modern_set_config_irq(struct virtio_crypto_hw *hw, uint16_t vec)
 {
-	rte_write16(vec, &hw->common_cfg->msix_config);
-	return rte_read16(&hw->common_cfg->msix_config);
+	virtio_pci_write16(vec, &hw->common_cfg->msix_config);
+	return virtio_pci_read16(&hw->common_cfg->msix_config);
 }

 static uint16_t
 modern_set_queue_irq(struct virtio_crypto_hw *hw, struct virtqueue *vq,
 		uint16_t vec)
 {
-	rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
-	rte_write16(vec,
-		&hw->common_cfg->queue_msix_vector);
-	return rte_read16(&hw->common_cfg->queue_msix_vector);
+	virtio_pci_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+	virtio_pci_write16(vec, &hw->common_cfg->queue_msix_vector);
+	return virtio_pci_read16(&hw->common_cfg->queue_msix_vector);
 }

 static uint16_t
 modern_get_queue_num(struct virtio_crypto_hw *hw, uint16_t queue_id)
 {
-	rte_write16(queue_id, &hw->common_cfg->queue_select);
-	return rte_read16(&hw->common_cfg->queue_size);
+	virtio_pci_write16(queue_id, &hw->common_cfg->queue_select);
+	return virtio_pci_read16(&hw->common_cfg->queue_size);
 }

 static int
@@ -175,7 +223,7 @@ modern_setup_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
 							ring[vq->vq_nentries]),
 				   VIRTIO_PCI_VRING_ALIGN);

-	rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+	virtio_pci_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);

 	io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo,
 				      &hw->common_cfg->queue_desc_hi);
@@ -184,11 +232,11 @@ modern_setup_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
 	io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo,
 				      &hw->common_cfg->queue_used_hi);

-	notify_off = rte_read16(&hw->common_cfg->queue_notify_off);
+	notify_off = virtio_pci_read16(&hw->common_cfg->queue_notify_off);
 	vq->notify_addr = (void *)((uint8_t *)hw->notify_base +
 				   notify_off * hw->notify_off_multiplier);

-	rte_write16(1, &hw->common_cfg->queue_enable);
+	virtio_pci_write16(1, &hw->common_cfg->queue_enable);

 	VIRTIO_CRYPTO_INIT_LOG_DBG("queue %u addresses:", vq->vq_queue_index);
 	VIRTIO_CRYPTO_INIT_LOG_DBG("\t desc_addr: %" PRIx64, desc_addr);
@@ -203,7 +251,7 @@ modern_setup_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
 static void
 modern_del_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
 {
-	rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+	virtio_pci_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);

 	io_write64_twopart(0, &hw->common_cfg->queue_desc_lo,
 			      &hw->common_cfg->queue_desc_hi);
@@ -212,14 +260,14 @@ modern_del_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
 	io_write64_twopart(0, &hw->common_cfg->queue_used_lo,
 			      &hw->common_cfg->queue_used_hi);

-	rte_write16(0, &hw->common_cfg->queue_enable);
+	virtio_pci_write16(0, &hw->common_cfg->queue_enable);
 }

 static void
 modern_notify_queue(struct virtio_crypto_hw *hw __rte_unused,
 		struct virtqueue *vq)
 {
-	rte_write16(vq->vq_queue_index, vq->notify_addr);
+	virtio_pci_write16(vq->vq_queue_index, vq->notify_addr);
 }

 const struct virtio_pci_ops virtio_crypto_modern_ops = {
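To see why the paired read barriers matter for a virtual device,
consider the config-generation retry loop that both
modern_read_dev_config() implementations in this series use. A minimal
standalone sketch, with C11 fences standing in for rte_smp_rmb() and
all names hypothetical:

#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

/* The generation counter and the config bytes live in plain
 * hypervisor-backed memory. Without smp ordering, the byte loads could
 * be reordered around the two generation loads and a torn config could
 * be returned even though the generations compare equal. */
struct vdev_cfg {
	volatile uint8_t generation;
	volatile uint8_t bytes[64];
};

static void
read_config(const struct vdev_cfg *cfg, uint8_t *dst, size_t len)
{
	uint8_t old_gen, new_gen;
	size_t i;

	do {
		old_gen = cfg->generation;
		/* keep the byte loads after the first generation load;
		 * plays the role of rte_smp_rmb() */
		atomic_thread_fence(memory_order_acquire);

		for (i = 0; i < len; i++)
			dst[i] = cfg->bytes[i];

		/* keep the byte loads before the generation re-read */
		atomic_thread_fence(memory_order_acquire);
		new_gen = cfg->generation;
	} while (old_gen != new_gen);
}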